xaminmo: Josh 2016 (Default)
If you have 6 filesystems backing a sequential access file storage pool, and you remove one filesystem, TSM cannot calculate free space properly.

Instead of looking at the free space of the remaining filesystems, it takes the total space of all the filesystems, minus the space consumed by volumes in that device class.

Since there may still be old volumes in the "removed" directory, it considers the device class 100% full if everything that currently exists cannot fit into the remaining directories.

Note that removing a directory from a device class does not invalidate the existing volumes in that directory. As long as the directory is still accessible, the volumes remain usable.

This is a problem when you want to shrink a filesystem but not migrate 100% off of it, because the only way to tell TSM not to allocate new volumes in a directory is to remove that directory from the device class.
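For example (a sketch; the device class name and paths are made up), dropping /tsm/file03 means re-issuing the directory list with only the survivors:

update devclass FILEDEV directory=/tsm/file01,/tsm/file02,/tsm/file04
query devclass FILEDEV format=detailed
query volume devclass=FILEDEV

The old volumes under /tsm/file03 still show up in that last query, and still count against the device class capacity.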

http://omnitech.net/reference/2014/01/07/tsm-file-class-design-issue/
xaminmo: Josh 2016 (Default)
So, /dev/sda is on the SIL-3132 add-in card, which runs at UDMA/100 on single-lane PCIe.

The drive on here runs 25% faster than the other 4 identical drives on ICH7 internal ports at UDMA/133.

Also, when idle, the ST32000542AS drives click every 20-21 seconds.

I heard rumors that updating to CC35 removes the clicks and also prevents drop-outs under Linux MDRAID.

There are vague rumors that it's related to a drive spin-down when the drive is queried for SMART data.

Seagate offers a bootable ISO for this, but it says, no, sorry, can't update your drive.

I run the command-line tool to update, and it works for the 4 on ICH7 ports, but the drive on the SIL-3132 gets a garbled model/serial when queried with their tool.

OK, well, losing the drives on the ICH7 is worse because it kills 2 drives at once due to weirdness with this Intel desktop board. I'll leave it with the one on CC34.

Then, I go to run smartctl and it fails to get any info... I'm freaking out, pissed, etc...

and then I realize I'm not root.
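For posterity, the sanity check (smartctl has to run as root to open the disk):

smartctl -i /dev/sda          # as a normal user: permission denied, no info
sudo smartctl -i /dev/sda     # as root: model, serial, firmware revision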

I R SMRT!

SFP GBIC

May. 12th, 2010 08:27 pm
xaminmo: (Josh 1975 Complain)
I've encountered many people who call a GBIC by the name of SFP. SFP means "Small Form-factor Pluggable". It's an adjective, not a noun. You cannot have "an SFP". You CAN have an SFP transceiver. An SFP transceiver has a standard, card-edge, copper interface.

The SFP transceiver may also be a GBIC, a "GigaBit Interface Converter", which converts from the standard copper interface to a fibre interface. Usually this is an LC (Lucent/Local/Little Connector), but for 10GBFC, it is more commonly an SC (Standard/Subscriber/Square/Siemon Connector). The SFP transceiver may instead use a copper cable for short-distance communication (patching between switches).

Other valid terminology:
* GBIC means "GigaBit Interface Converter", and can be used as a noun, all by itself.
* SFP GBIC is also valid, since SFP modifies GBIC.
* SFP to SFP cable is valid, because the adjective is describing the cable end.
* SFP plug, or SFP end, is valid too, though it doesn't actually define a component; it describes half of one.

If you wanted to fully describe a fibre connection module, you would call it something like an SFP to LC GBIC. This says it's an interface converter: one side is SFP, and the other side is LC fibre. (There are other fibre plugs than LC.) This is the #1 assumed device if you refer to it simply as a GBIC.

That is all.

This is a recording.

*Beep*
xaminmo: (Computer Drive)
OCZ Technology announced the company's plan to enter mass production of the Z-Drive R2 Solid State Drive (SSD) series. The Z-Drive family is a bootable 8-way RAID 0 and connects via an 8-lane PCI Express slot. It claims to be the only bootable and field-serviceable PCIe SSD option on the market today. The NAND modules are socketed to enable field upgrades.

Amazon prices range from $1268 for a 250GB M84 to $10,300 for a 2TB P88. The E series is not available yet.

Common Specifications:
NAND Flash Components: Multi-Level Cell (MLC) NAND Flash Memory
Interface: PCI Express
Form Factor: x8 slot, full-height PCI Express
Life Expectancy: 1 million hours Mean Time Between Failures (MTBF)
Reliability: ECC is BCH with 8, 12, or 16 bits correctable, depending on NAND
Product Health Monitoring: Self-Monitoring, Analysis and Reporting Technology (SMART)
Operating Temperature: 0°C to +70°C
Storage Temperature: -45°C to +85°C
Certifications: RoHS, CE, FCC
Performance Optimization: Background Garbage Collection (GC)

The 84-module specifics:

Cache: 256MB onboard cache
Max Read: up to 800MB/s; Max Write: up to 750MB/s; Sustained Write: up to 500MB/s
Max Random Write IOPS: 7500 (4KB, QD32); Max Random Read IOPS: 29000 (4KB, QD32)
Power Consumption: 12W active

The 88-module specifics:

Cache: 512MB onboard cache
Max Read: up to 1.4GB/s; Max Write: up to 1.4GB/s; Sustained Write: up to 950MB/s
Max Random Write IOPS: 14500 (4KB, QD32); Max Random Read IOPS: 29000 (4KB, QD32)
Power Consumption: 20W active

Press Release:
http://www.oczenterprise.com/news/ocz-technology-launches-next-generation-z-drive-pci-express-solid-state-drive-ssd.html


Question and Answer with Patrick Kiley, WW OEM Sales Mgr for OCZ Technology
JD: So the individual modules are roughly 200MB/sec read and write, and you have 4 or 8 mounted on an 8-lane PCI Express SATA controller?

OCZ: This is extremely close to being correct. We have 4 or 8 SSD controllers mounted on a board using an LSI 1068E SAS controller.

JD: Is the RAID-0 transparent, or will drivers be required?
OCZ: Drivers are required for the 1068E to be identified by the OS; however, the RAID0 is done in hardware and is transparent.

JD: What's the form factor of the board?
OCZ: The P84 boards are full height and close to full length. The P88 boards are the same, but double width.

JD: Will it need a bracket on the tail end if mounted horizontally?
OCZ: Some form of extra support would be suggested for horizontal applications (1U, 2U as examples). Since each platform is different, OCZ does not have an off-the-shelf solution. We would be happy to work with customers on a case-by-case basis where the business makes sense.

JD: Will this be MLC, SLC or a combination?
OCZ: M84, P84, and P88 are MLC. E84 and E88, which will be launching soon, are SLC.

JD: How much RAM cache?
OCZ: The LSI 1068E does not have an external cache on it. Each of the SSD controllers has 64MB of cache.

JD: Does the long term performance require TRIM to be enabled?
OCZ: This depends on how the drive is used. If it's used for sequential writes, then no, it should be fine without TRIM. If the drive is relatively full and has a lot of small, random writes, then TRIM is very helpful in maintaining performance. Currently TRIM does not work, because the individual controllers are behind the RAID controller in an array. OCZ is working on developing drivers that will enable TRIM, as well as consolidated SMART reporting.

JD: Will the modules be hot-swappable?
OCZ: No

JD: Will in-place capacity upgrades be supported? (e.g., replace one bank of modules, let it sync up, then replace the other bank)
OCZ: No, all modules will need to be replaced at the same time. Keep in mind this is RAID0, so data is lost regardless.

JD: Will other RAID levels be supported than RAID0? If so, is the processing done on the board, or in the OS? If it's on the board, what sort of CPU/DSP is involved?
OCZ: Not in this generation, no. RAID0 only, and processing is done in the 1068E.
xaminmo: Josh 2016 (Default)
"Exabyte was acquired by Tandberg Data on November 20, 2006"

You know, ADIC was acquired by Quantum in May of 2006.

WTF. I'm oblivious.

Also, IBM TotalStorage TS3200 and Dell PowerVault TL4000... Both are very sneaky and secretive about the OEM of this one.

I thought maybe it was ADIC, since the 3576/TS3310 was ADIC and the RMU has the same look and feel.

Nope. None of the ADIC/Quantum libraries have the same element numbers.

Maybe Exabyte?? Nope.

Tandberg Data. It's like a cross between the T40 and the T48. The base T40 is 24 slots, with two 12-slot rails, one on each side. Then there's a slimmer set of expansion rails that adds the other 16 slots. The T40+ allows you to stack 5 of these together for 188 slots total (and 10 LTO3/LTO4 drives). The T48 has 48 slots, and is a little different.

The T40 has slots starting at element 4096 and has 3 I/O ports, exactly the same as the TS3200. The TS3200 has 4 more slots than the T40, and the squares in each corner are the same size.
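(Easy enough to compare, assuming IBM's tapeutil pointed at the changer device:)

tapeutil -f /dev/smc0 elementinfo    # element counts and starting addresses
tapeutil -f /dev/smc0 inventory      # per-element addresses and barcodes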

So this is sort of a hybrid or a ++ model that Dell and IBM resell, but that Tandberg didn't see the benefit of selling on its own.

But the question arises: how do I refer to the OEM product? I want to call it a T44, but that doesn't really exist. It's electronically a T40 with a tiny bit of extra space. Technically it DOES have 48 slots: 1 cleaner, 3 I/O, and 44 data. But the element numbers match the T40 and not the T48.

To make things worse, they bought Exabyte, but tandbergdata.com all points back into Exabyte. Much of the important data is missing for the T* series.

/me likes to whine.

USB key.

Aug. 7th, 2007 05:15 pm
xaminmo: Josh 2016 (Default)
PNY 16GB USB key for $150 from Amazon.

I'm still amazed at how the price per gig changes over time.
xaminmo: Josh 2016 (Default)
One of my customers had a disk array catch fire late last week. The system they were using is 100% hot-swap.

I wonder if someone replaced a battery improperly. Each controller has a lead-acid battery, and if rigged up improperly, or if the canister were crushed such that the electrodes grounded out, it would definitely catch fire. It'd be pretty tough for mains power to catch fire without tripping the internal breaker/thermal overload.

Apparently, the array crashed a week ago and all data were lost. This would be consistent with a battery short, which would cause the cache controller to lose its data.

The only other time I've seen equipment catch fire was when things were stuffed into vent holes where they didn't belong, or when a defiant CE insisted that a High Node (R50 in an SP frame) definitely did have hot-swap power supplies when it definitely did not.

My sister did work with a customer that had toxic green goo oozing out of their Unix server in a basement datacenter. It turns out the combo of humidity, airflow, and improper clearance from a high-end German print/copy system was to blame. The fumes from the printer were apparently acidic, and the floor vents were laid out such that the exhaust went into the Unix system.

So anyway, I may be going to assist them with rebuild on Monday. I'll find out in the AM.
The replacement is a DS4800 with 22TB in 300GB, 15kRPM drives.
This will be arrayed and then fed into an SVC.
The SVC will farm this out to 41 dev servers.
There will be mapping and zoning galore.

If the servers were already on the SVC, there should be tags for the hosts already.
Hopefully they'll have a list of servers handy for me.

Since they lost the data, this means they're not using snapshots in the SVC.
Once the storage is made available to the hosts, there will be recovery via TSM.
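(For flavor, the per-host plumbing on the SVC side; the pool, host, and vdisk names are made up:)

svctask mkvdisk -mdiskgrp DS4800_POOL -iogrp 0 -size 100 -unit gb -name dev01_v0
svctask mkvdiskhostmap -host dev01 dev01_v0
svcinfo lshost     # the existing host objects, if they kept them

Times 41 servers, hence the mapping and zoning galore.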

My biggest concern is that they lost the entire storage subsystem. I mean, this is 5 drawers, each in a steel case. They lost the entire thing. This sounds fishy to me. I guess they probably designed their arrays poorly and ended up losing enough parity members to be unrecoverable. I can't imagine that even a smouldering fire would go unnoticed in a proper datacenter long enough to destroy 5 drawers.

They also had other issues last year. They were using old Brocade switches with old firmware. They would update zoning during midday loads and get I/O errors. For single-pathed and AS/400 systems, they would lose the LUNs altogether.

I think there is room for policy and procedural enhancements which would increase stability and employee satisfaction. I'd love for us to do an IT architecture review for them.

Since I've given details here, I can't say who or where, but it'll be hourly and long hours to get things set up as quickly as possible.

I probably should reserve a room in advance, just in case.
xaminmo: Josh 2016 (Default)
# hash every file, sort so duplicate hashes land together, then print all
# lines matching on the first 33 chars (the 32-char md5 plus one separator):
find . -type f -exec md5sum {} \; | sort | uniq -Dw33

Maybe it's less overhead to use md5deep's built-in recursion:
# -z prepends each line with the file size, so widen the match to 43 chars:
md5deep -zlr . | sort | uniq -Dw43
xaminmo: Josh 2016 (Default)
I upgraded my backup server from 300GB drives to 500GB drives.

Retention is back up to 30 days for obsolete files and 90 days for deleted files.

The old disks go into my Linux box, but for some reason the SATA RAID controller still presents the disks separately, even after I built an array in the BIOS.

Hrm...
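(If I had to guess: most onboard SATA RAID is fakeRAID. The BIOS just writes metadata onto the disks, and the OS is expected to assemble the set itself. On Linux that means dmraid, or ignoring the BIOS array and using mdadm. A sketch, assuming the tools are installed and the old disks landed as sdb and sdc:)

dmraid -r        # list disks carrying BIOS RAID metadata
dmraid -ay       # activate those sets as /dev/mapper devices
# or skip the BIOS array entirely and do it in software:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc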
xaminmo: Josh 2016 (Default)
ServeRAID = Adaptec HostRAID
DS300 and DS400 enclosures are Adaptec also
http://www.adaptec.com/ibm/downloads/

Unrelated SAS controllers:
http://www.adaptec.com/en-US/products/sas/host/
xaminmo: Josh 2016 (Default)
TAPE DETAILS:
-------    ------------------------------------------------
$9600      40-slot, single-drive LTO3 library, incl shipping
$2600      40 new LTO3 tapes, incl shipping
$150       SCSI controller
-------    ------------------------------------------------
$12,350    Total for LTO3 library, 40 tapes (16TB plus compression)
           (8TB with tape copies)
           Sequential access only, with up to 2 minutes access time.


DISK DETAILS:
---------   ------------------------------------------------
$450        16-port SATA card ($425 + $25)
$225        8-port SATA card
$265        4-drive enclosure ($160 + $60 + $45)
$1549.09    4x 750GB SATA drives
---------   ------------------------------------------------
$3855       8 drives = 5.85TB (8-port card + 2 enclosures + 8 drives)
$3855       8 drives = 5.85TB
$3855       8 drives = 5.85TB
$785        PC with gigabit ethernet, 2 boot disks, 2GB RAM, Linux iSCSI target
---------   ------------------------------------------------
$12,350     Total for PC system to run 17.57TB
            (15.38TB with 3 parity disks)
            (13.18TB with 3 parity, 3 hot spares)
            (8.78TB with 1-to-1 mirroring)
            All live and random access
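(The capacity math, for reference: 24 × 750GB = 18,000GB, about 17.57TB binary. Losing 3 of the 24 drives to parity leaves 21, about 15.38TB; 3 more as hot spares leaves 18, about 13.18TB; 1-to-1 mirroring halves the 24 to 12, about 8.78TB.)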
xaminmo: Josh 2016 (Default)
So, apparently, on April 29, I decided it would be good to send wget after VLDB.org (Very Large DataBases). I'd been doing research into R* databases, and this site had a lot of good conference documentation.

So fire and forget, right?
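(A reconstruction, not the actual command, but fire-and-forget would have looked something like:)

nohup wget -r -np -w1 http://www.vldb.org/ &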

So, today, I'm sorting my uploads directory and I find the dir. It took me a little bit to realize what it was.

So then, it seems like a lot of files. Maybe I should just RAR it up, right?

Hrm... It's taking a long time.

After some checking, it turns out that the dir has 403,348 files totalling 4.17GB.

And I'm having trouble deciding whether to delete it. We'll see how big it is RARed up.
xaminmo: Josh 2016 (Default)
I know this shows my ultimate dork-geek-ness, but I keep thinking I need to write a quick-sort routine that uses tapeutil to sort the tapes in an IBM library by barcode.
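If I ever do, the whole trick is the physical swap; something like this (a sketch only; the changer path, element addresses, and the existence of a free slot are assumptions):

#!/bin/sh
# sketch: the swap primitive behind any in-library sort.
# assumes an IBM changer at /dev/smc0 and one empty storage element.
CHG=/dev/smc0
SPARE=1200     # element address of an empty slot (made up)

swap() {       # exchange the cartridges in elements $1 and $2 via $SPARE
  tapeutil -f "$CHG" move "$1" "$SPARE"
  tapeutil -f "$CHG" move "$2" "$1"
  tapeutil -f "$CHG" move "$SPARE" "$2"
}

# any comparison sort over the barcodes from 'tapeutil -f /dev/smc0 inventory'
# then just calls swap; quicksort works, but every swap is three physical moves.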

*sigh*
xaminmo: Josh 2016 (Leenooks)
ENV: Debian testing, unstable, experimental
  • Linux ns1 2.6.14-2-686-smp #2 SMP Fri Dec 9 10:20:41 UTC 2005 i686 GNU/Linux
  • sda, sdb and sdd are 9.1GB WD 7200RPM
  • sdc is a 173GB Seagate
  • the others are 50GB Seagates w/o domain validation support
  • everything's on a dual-channel Adaptec U160 card

PROBLEM: non-multipath SCSI devices other than the boot device become unavailable if multipath-tools is installed.
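(The usual fix, assuming device-mapper is what's grabbing the single-pathed disks: blacklist them in /etc/multipath.conf and flush the stale maps. The devnode pattern is an example, not my actual config.)

cat >> /etc/multipath.conf <<'EOF'
blacklist {
    devnode "^sd[a-h]$"
}
EOF
multipath -F                          # flush existing multipath maps
/etc/init.d/multipath-tools restart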

TSM

Nov. 12th, 2005 10:51 am
xaminmo: Josh 2016 (Default)
So, I looked, and while I had TSM 5.3.2.0 on disk, the installed version was 5.3.0.0.
Doh.
auditdb ADMIN took about 2 mins
I'm doing diskstorage now, I think
then archstorage and inventory.
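(For reference, these run offline through dsmserv; a sketch from memory, so check the syntax:)

dsmserv auditdb admin fix=yes
dsmserv auditdb diskstorage fix=yes
dsmserv auditdb archstorage fix=yes
dsmserv auditdb inventory fix=yes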

Wheee

Nov. 11th, 2005 09:28 am
xaminmo: Josh 2016 (Default)
ANR0102E dsalloc.c Error 1 inserting row in table DS.Segments

So I GUESS I should go ahead and upgrade the server past base-level 5.3.

PTAM

Aug. 15th, 2005 02:59 pm
xaminmo: Josh 2016 (Barcode)
So, I'm reading a book with some disaster recovery terms in it. It all looks familiar and overly basic, but then I see "PTAM". I assumed this was some mainframe thing, but then I see it defined:

Pickup Truck Access Method

AKA, to get your offsite data, you have to have someone bring it to you. :)

Whee

Aug. 9th, 2005 11:31 am
xaminmo: Josh 2016 (Default)
OK, so I'm feeling a bit dumb. Trying to pull up stuff from 3.5 years ago is proving difficult.

However, today, Shelby had a customer whose nightly backup took around 4 hours, but the restore was taking over 24. In 4 hours, it had restored 5.6GB.
So, in some small number of minutes, we'd restored the directories, mounted the tape, and restored over 6GB of data.
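(The shape of the fix, assuming the standard B/A client; the filespec is made up:)

dsmc restore /data/ -subdir=yes -dirsonly    # directories first, no tape mounts
dsmc restore /data/ -subdir=yes              # then the files, in one tape pass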

yAy!
xaminmo: Josh 2016 (IT 'R Us Itrus Technologies)
Amdahl LVS 4800 aka Fujitsu LVS 4800
We have several of these, and I'd like to play more as I get time.
Any first-hand experience in usage, hacking, etc. would be appreciated.
