IU Data Center standards

This document outlines Indiana University Data Center guidelines and standards, including equipment installations, Data Center access, and operational procedures.

All UITS staff and external departmental staff who have equipment responsibilities in the Data Centers should accept the terms and responsibilities outlined in this document.

On this page:

  1. Requesting installation
  2. Acquisition guidelines
  3. Equipment installation
  4. Equipment removal
  5. Operations procedures
  6. Data Center networking
  7. Data Center network firewalls

1. Requesting installation

1.1 Space request: Before submitting a proposal for new equipment, you'll need to begin initial discussions regarding machine room space. Once you submit a machine room space request form, the Data Center Operations Manager and the electrical engineer will review available space and determine a location suitable for the equipment. The purpose of the discussion is to address environmental issues (for example, equipment BTU and power specifications), space, and floor location. The floor location may be determined based on environmental data.
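
When preparing the environmental data for this discussion, a device's heat load can be estimated from its electrical draw (1 W dissipates about 3.412 BTU/hr). A minimal sketch, using a made-up 5 kW load:

```python
# Estimate heat load for a space request: watts drawn -> BTU/hr dissipated.
# The 3.412 conversion factor is standard; the 5 kW figure is a hypothetical example.
def watts_to_btu_hr(watts: float) -> float:
    return watts * 3.412

print(f"{watts_to_btu_hr(5000):,.0f} BTU/hr")  # a 5 kW rack is roughly 17,060 BTU/hr
```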

2. Acquisition guidelines

2.1 Rack-mounted devices: Ensure that you're purchasing rack-mounted equipment; all equipment must be either mounted in a rack or housed in its own proprietary cabinet (not free-standing). The Operations Manager must approve any exceptions.

2.2 Equipment specifications: Upon making your equipment selections, include the vendor specification when submitting your space request form.

2.3 Power: Request power for each device from the electrical engineer. Power requests will be handled as follows:

IUPUI: Two rack-mounted cabinet distribution units (CDUs) will be installed in each standard rack. These CDUs use 208V IEC C13 and C19 outlets. Your hardware will need to operate at 208V and have the proper power cords. Installation of CDUs can take up to a month, so request power as early as possible. For non-standard or proprietary racks, twist-lock receptacles will be provided under the floor for connection of user-supplied CDUs.

IUB: To maintain Uptime Institute Tier III requirements, two rack-mounted CDUs fed from different power sources will be supplied in each rack. To assist with load balancing, purchase hardware with 208V IEC C13 or C19 plugs whenever possible. Do not plug hardware into any receptacle without authorization from the electrical engineer.

2.4 Receiving and placement: When large, heavy equipment is involved, you'll need to make arrangements for receiving the order. It is also your responsibility to arrange for the equipment to be moved to the proper place in the machine room. In emergency situations, Operations staff will act as the contact and receive equipment/parts after normal dock hours.

2.5 Equipment arrival notification: Receiving dock personnel will notify you of equipment arrival (unless otherwise arranged).

3. Equipment installation

3.1 Data Center staging area:

IUPUI: There is little to no staging area outside the machine room, so you'll need to use machine room space to uncrate/unpack and prepare equipment for installation, and you'll need to move/install all items to their intended locations within two weeks. If you cannot do so within this period, make other arrangements through UITS Facilities or Operations. (This does not guarantee other storage locations.)

IUB: A staging area for equipment is located just off the dock. This space is for uncrating/unpacking and preparing equipment for installation. The area is only for temporary storage; you have two weeks to move/install all items to their intended locations. If you cannot do so within this period, make other arrangements through UITS Facilities or Operations. (This does not guarantee other storage locations.)

3.2 Cabinet design: Unless the server manufacturer specifically dictates that the equipment must be housed in their proprietary cabinet, all servers will be installed in the standard cabinets provided by Operations. You'll need to submit proof of vendor proprietary cabinet requirements to SAG. Such cabinets should have front, back, and side panels. Unless someone is working in the unit, cabinet doors must remain closed.

3.3 Cabinet/Rack: Operations will label the cabinet (both front and rear) with the unique floor grid location and with the power circuit serving that unit.

Equipment spacing within the cabinet/rack should allow appropriate airflow for proper cooling. Blanking panels will need to be installed to fill in all vacant rack space.

3.4 UPS: No rack-mounted uninterruptible power supplies (UPSes) will be allowed. The Enterprise Data Centers will provide backup power.

3.5 KVM solutions: Rack-mounted monitors and keyboard trays are required only if you need KVM access. KVM cabling between racks is not allowed.

3.6 Hardware identification: Supply Operations with the appropriate fully qualified server names, and they will label all equipment within the cabinets so that hardware is easily identifiable. You will need prior approval from the Operations Manager and Communications Office to display any signage.

3.7 Disposal of refuse: The person or team installing a device is responsible for disposing of all refuse (e.g., cardboard, Styrofoam, plastic, pallets). Remove any refuse from the IUPUI machine room and the IUB Data Center staging area daily and, if possible, recycle any cardboard.

3.8 Combustible and flammable material: Do not leave combustible materials in the machine rooms; such materials include cardboard, wood, and plastic, as well as manuals and books. This prohibition also covers wooden tables and shelves.

3.9 Review installation: The person requesting installation should arrange with the Operations Manager for a final review of equipment installation to ensure that appropriate policies and procedures are implemented before the equipment becomes production ready.

3.10 Negotiations: Any negotiations and exceptions must be arranged between the system owners and the Operations Manager and approved by the Director of Enterprise Infrastructure and the relevant director or officer of the area involved.

3.11 Replacement parts: All onsite replacement parts should be stored in a storage cabinet or on storage shelves in the storeroom (e.g., for use by service vendors such as IBM or Service Express). Make any necessary storage arrangements with Facilities or the Operations Manager.

4. Equipment removal

4.1 When a new system is replacing a pre-existing machine, the old system must be properly decommissioned via the Change Management process. Submit a request to CNI for the removal of firewall rules for machines that are decommissioned.

4.2 Removal of old hardware must be coordinated with the UITS Facilities Manager and follow all appropriate policy, standards, and guidelines relating to data destruction, wiring removal, and component disposition.

4.3 Be sure to complete all appropriate capital asset transfers.

4.4 The cost of removal is borne by the owner, and all equipment must be removed no later than 30 days after it has been decommissioned. Exceptions to the 30-day removal period require approval by Facilities or the Operations Manager.

5. Operations procedures

5.1 Data Center access: Due to the sensitive nature of the data and computing systems maintained within its facilities, security and access are important aspects of the OVPIT/UITS environment. In most cases, the university is contractually and legally obligated to limit access to only those who have IT responsibilities requiring frequent access.

Security cameras are located throughout OVPIT/UITS buildings. These cameras record footage for follow-up in the case of a security incident, and they also serve as an effective deterrent, supporting the safe operation of the building.

UITS staff with responsibilities in the Data Center may gain access through an arrangement between the department manager and Operations. Users with IU Login credentials can make requests via the Data Center Access Request form.

External users who do not have IU Login credentials will receive instructions from the Data Center on how to make requests via the Affiliate datacenter access request form.

Persons other than full-time UITS staff are permitted in the Data Center only under one of the following conditions:

  A. They are full-time staff of vendors providing services to UITS. Contract consultants or service representatives may be authorized by prior arrangement with Operations.
  B. They are full-time staff of Indiana University working on a system owned by an IU department and housed in the Data Center under terms specified in a Colocation Agreement. Access will be granted in situations requiring hands-on system administration, not simply because a system is present on a machine in the Data Center.
  C. They are full-time or contracted staff of a non-IU entity that owns a system housed in the Data Center under terms specified in a Colocation Agreement. Again, access will be granted only when hands-on system administration is necessary, not simply because a system is present on a machine in the Data Center.
  D. They are escorted by a full-time UITS staff member as part of a tour of the facilities.

ID badges and access cards will be provided for those individuals who meet criterion A, B, or C. The ID badges must be worn and visible during visits to the Data Center. All staff who meet criterion A, B, or C are expected to sign into the Data Center through Operations before entering the room and to sign out upon exiting.

PIN pad badge readers are installed at both Data Centers. You will be asked to set a 6-digit PIN when you fill out your Data Center access request form each year.

5.2 Equipment registration: Equipment must be registered in the machine room inventory. Email dcops@indiana.edu if you experience problems accessing the Data Center Inventory System.

5.3 Essential information: The system owner will enter the essential information into the Data Center Inventory System and update that information if it changes. Essential information includes:

  • System hardware: A complete description of the machine's hardware configuration, including vendor, model, on-board memory, and secondary storage media vendor/type
  • System software: A complete description of the machine's software configuration, including operating system vendor, version, patch level, and other major software components on the system
  • System function: A complete description of the machine's function (the service that it provides)
  • System recovery: Accurate startup and shutdown procedures and special information relating to crash or other emergency recovery situations
  • On-call notification: Primary and secondary system contacts and schedules, plus contact information for the manager supporting the system. (Please provide prior to production date.)
  • Vendor and maintenance contract: Vendor contact information, including information related to the maintenance/service contract and warranty. (The Operations Manager will assist in negotiating maintenance contracts on behalf of UITS, but budgets for ongoing maintenance should be managed in the individual units.)
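
If it helps to assemble these items before entering them, the checklist maps naturally onto a simple record. A hypothetical sketch (the field names are illustrative, not the Data Center Inventory System's actual schema):

```python
# Hypothetical pre-entry record for the "essential information" checklist above;
# field names are illustrative, not the Inventory System's schema.
from dataclasses import dataclass

@dataclass
class EssentialInfo:
    system_hardware: str   # vendor, model, on-board memory, storage media vendor/type
    system_software: str   # OS vendor, version, patch level, major software components
    system_function: str   # the service the machine provides
    system_recovery: str   # startup/shutdown and emergency recovery procedures
    on_call: str           # primary/secondary contacts, schedules, supporting manager
    vendor_contract: str   # vendor contact, maintenance/service contract, warranty
```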

5.4 Change Management: The system manager or system administrator needs to participate in the Change Management process by presenting the deployment of a new production system before implementation. At the start of the fall and spring semesters, a Limited Change Period of approximately two weeks occurs; dates are posted at UITS Change Management.

The following types of changes will be allowed during the Limited Change Period following standard Change Management procedure:

  • Routine patches, security patches, and bug fixes for applications
  • Low-risk application deployments with limited impact on user experience

Essential Changes (previously called Emergency Changes) not represented in Change Management must continue to receive director approval and be communicated to the Change Management email list. These are changes that cannot wait until the Limited Change Period ends and are deemed important enough to assume the risk of making the change.

The following types of changes will generally not be allowed during the Limited Change Period (unless deemed an Essential Change by the director and communicated to the Change Management email list):

  • Medium- and high-risk changes requiring significant modification and testing
  • Anything impacting the usability, functionality, or availability of certain key services required for a smooth semester start-up. This includes:
    • Tier0 and Tier1 services as defined by the Incident Management Team
    • CrimsonCard
    • Google at IU
    • Box
    • IU Print (except to load allotments for the new semester)

5.5 Monitoring tools: Network monitoring tools will scan incoming machines as appropriate. Please supply the network address and any special considerations for the monitoring mode.

5.6 Security: All servers are expected to be administered in a secure manner using the industry best practices in the policy Security of Information Technology Resources (IT-12), including employment of host-based firewalls and operation behind the machine room firewall. You are expected to properly review and formally acknowledge all relevant security policies and standards. For more information, see Stay safe online.

5.7 System backups: Best practices include implementing a proper system backup schedule, with incremental, full, and archival backup processes deployed as needed. You must use proven, supported backup software and apply appropriate standards for off-site backup data for production systems.

Enterprise Infrastructure offers a data backup service as part of the Intelligent Infrastructure suite of services.

5.8 Data Center tours: All tours of the machine room must be scheduled with the Operations Manager.

6. Data Center networking

System classification:

  • IU Enterprise Systems: Indiana University systems using IU IP addressing that reside in the enterprise environments at the IUPUI and IUB data centers. These systems use the standard top-of-rack (TOR) switching configuration.
  • Non-IU Colocation Systems: Systems that use only Indiana University data center space, power, and cooling. These external customers do not use IU IP address space or networking.
  • IU Research Systems: Indiana University systems that are located primarily on the IU research networks. Physical placement lies within the areas designated as research environments at the IUPUI and IUB data centers.

6.1 Campus network availability: All enterprise racks come standard with one switch installed at the top of the cabinet to provide 48 ports of 10/100/1000 Mbps Ethernet connections into the Data Center switching infrastructure. 10 G or additional 1 G switches are available by request at an additional cost. All public and private Ethernet connections are to be provided by UITS unless special circumstances are reviewed and approved by Campus Network Engineering (CNE). This policy applies to Enterprise and any Research System using IU campus networks.

6.1.5 VLAN availability: Data Center VLANs can only be extended between the IUB and IUPUI Data Centers. Data Center VLANs cannot be extended to other areas on campus.

6.2 Projects/RFPs: If you are embarking on a new project, include Campus Network Engineering (CNE) in these discussions. CNE can assist you with the networking requirements and ensure they are compatible with the existing network design and will achieve the performance you require. Contact noc@indiana.edu to schedule a meeting.

6.3 Requesting network connectivity: Entire Data Center VLANs/subnets are allocated to specific departments/teams. Data Center VLANs/subnets are designed not to be shared across multiple departments/teams. If your department does not yet have a Data Center VLAN/subnet, contact noc@indiana.edu to request one. Once you have a VLAN/subnet assigned, request network switch ports by going to the Telecom Request Page and selecting Data Center – Networking and then Switch Port Activation.

Static IP addresses must be assigned by using the Campus Network Portal (CNP). More information regarding the management of static IPs with the CNP is available in a separate, access-restricted Knowledge Base document.

This policy applies to Enterprise and any Research System located in the enterprise environment using IU campus networks.

6.4 Firewall security: Request firewall rules via the Campus Network Portal. The firewall request page includes additional information on firewall policies and standards. A full explanation of firewall policies and best practices can be found in Section 7 of this document.

6.5 Rack Ethernet switches: All enterprise environment switches in the Data Center will be managed by CNI; system administrators shall not manage their own switches. This applies to any Ethernet switch located in a rack in the enterprise environment, including private switches that are not designed to connect to the campus network switches.

Blade chassis switches are allowed in the enterprise environment in certain cases. If you intend to install a chassis server environment, please contact noc@indiana.edu to schedule a meeting with a campus networks Data Center engineer to discuss the chassis networking.

6.6 Internal rack wiring: Internal rack wiring should follow rack cabinet management standards. Cables should be neatly dressed. All cables should be properly labeled so they can be easily identified. Refer to TIA/EIA-942 Infrastructure Standard for Data Centers, section 5.11; a copy is available in the Operations Center. This policy applies to all users. Users in the Enterprise environment are not allowed to run cables outside of the racks.

6.7 Sub-floor copper/fiber wiring requests: All data cabling under the floor, including SAN and Ethernet, must be installed by CNI in a cable tray. Any requests for sub-floor copper or fiber can be made via the UITS Telecommunications site. CNI can supply copper, single-mode fiber, and multi-mode fiber connectivity between locations. This applies to anyone with systems in the Data Center. The requester is responsible for paying for these special requests. CNI can provide an estimate for any work requested.

6.8 Server network adapter bridging: Server administrators are not permitted to use any form of software-based network adapter bridging. Attempts to bridge traffic between two server interfaces are subject to automated detection and shutdown. This policy applies to Enterprise and any Research System using IU campus networks.

6.9 Server network adapter teaming/trunk/aggregate links: Server administrators may use the teaming of network adapters to increase bandwidth and redundancy. CNI can also set up static LACP trunks on the switch when the aggregation of links is required.
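
As an illustration only (not an official procedure), the host side of an 802.3ad (LACP) aggregate on a Linux server might be built with the iproute2 tools; the interface names below are hypothetical, and CNI must configure the matching trunk on the Data Center switch before the link will pass traffic:

```python
# Hypothetical sketch: create an 802.3ad (LACP) bond from two NICs on a Linux
# host using iproute2. "eno1"/"eno2" are placeholder interface names.
import subprocess

def run(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)  # run an iproute2 command (requires root)

run("ip link add bond0 type bond mode 802.3ad")  # create the aggregate interface
for nic in ("eno1", "eno2"):
    run(f"ip link set {nic} down")               # a member must be down to be enslaved
    run(f"ip link set {nic} master bond0")       # add the NIC to the bond
    run(f"ip link set {nic} up")
run("ip link set bond0 up")                      # bring up the bonded interface
```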

6.10 SAN fiber channel (FC) switches: CNI does not provide the management or hardware for SAN switches. Administrators are allowed to install and manage their own SAN switches. CNI can provide fiber trunks outside of the racks as needed (policy 6.7).

6.11 Multicast: Multicast is available by request only. Multicast functionality should not be assumed to work until requested and tested with a network engineer.

6.12 Jumbo frames: Jumbo frames are available by request. Jumbo frame functionality should not be assumed to work until requested and tested with a network engineer.
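
One host-side sanity check after a jumbo frame request is to read the interface MTU; a value above 1500 suggests the host is configured, though end-to-end behavior should still be tested with a network engineer. A Linux-only sketch with a placeholder interface name:

```python
# Linux-only sketch: read an interface's MTU via the SIOCGIFMTU ioctl.
import fcntl
import socket
import struct

SIOCGIFMTU = 0x8921  # Linux ioctl number for "get interface MTU"

def get_mtu(ifname: str) -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        ifreq = struct.pack("16si", ifname.encode(), 0)
        ifreq = fcntl.ioctl(s.fileno(), SIOCGIFMTU, ifreq)
        return struct.unpack("16si", ifreq)[1]

print(get_mtu("eth0"))  # "eth0" is a placeholder interface name
```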

6.13 Tagged VLANs: CNI can provide VLAN tagging when needed. VLAN tagging (trunking) can be used to aggregate VLANs over a common physical link to environments such as VMware, Hyper-V, etc.
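
For a bare-metal Linux host (hypervisors have their own tagging configuration), a tagged sub-interface might be added as in the sketch below; the VLAN ID and interface names are hypothetical and must match what CNI provisions on the switch port:

```python
# Hypothetical sketch: add a tagged VLAN sub-interface with iproute2.
# VLAN ID 100 and the parent interface "bond0" are placeholders.
import subprocess

def run(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)  # requires root

run("ip link add link bond0 name bond0.100 type vlan id 100")
run("ip link set bond0.100 up")
```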

6.14 IPv6: IPv6 is available in the data centers by request. IPv6 functionality should not be assumed to work until requested and tested with a network engineer. Note: IPv6 is already enabled in most IU Bloomington and IUPUI academic buildings.

7. Data Center network firewalls

This section applies to:

  • Enterprise Systems: Indiana University enterprise systems using IU IP addressing that reside in the enterprise environments at the IUPUI and IUB data centers. These systems utilize the standard top-of-rack (TOR) switching configuration.
  • Firewall Security: Submit firewall configuration requests via the Campus Network Portal.

7.1 Exception requests to any standards described within this section can be submitted via noc@indiana.edu. Include the nature of the exception request and any related policies defined in this document.

7.2 As a general rule, all firewall policies should be as restrictive as possible. Recommended guidelines:

  • Use the group "All_IU_Networks" as a source instead of "ANY 0.0.0.0" when services do not need to extend outside of the IU network. This reduces the source scope from roughly 4,300,000,000 addresses down to just the 688,000 IU-owned IP addresses, and it also blocks many external scanning attempts originating outside the United States. CNI maintains global address groups for common groups of addresses across the campuses.
  • Avoid using "Any (1-65535)" as a destination port when possible.

7.3 Host-based firewall rules should be used on every host where applicable. Host-based rules should be more restrictive than the Data Center firewall rules when possible.

7.4 Outbound traffic (traffic leaving a firewall zone) is allowed by default. You only need to create exceptions for traffic entering the Data Center from outside of a host's security zone.

7.5 Service port "Any (1-65535)" as a destination port in a firewall policy is allowed when meeting all of the following criteria:

  • Source addresses must all be IU-owned IP space, and must be as specific as possible. For example, global groups such as "All IU Statewide" cannot be used.
  • The destination addresses must be defined as specific /32 IP addresses. This setup cannot be used when the destination is an entire subnet.

7.6 Source address "Any (0.0.0.0)" in a firewall policy is allowed when meeting the following criteria:

  • The destination addresses must be defined as specific /32 IP addresses. This setup cannot be used when the destination is an entire subnet.
  • Destination ports must be specific. "Any (1-65535)" cannot be used.

7.7 Destination address "Any (0.0.0.0)" in a firewall policy is not allowed. The destination should be specific to a subnet or set of IP addresses which have been assigned to your department.

7.8 Entire subnets as destinations are allowed when meeting all of the following criteria (accepted starting January 13, 2014):

  • Source addresses must all be IU-owned IP space. This can be a source subnet as long as it is from IU-owned IP space; it must be as specific as possible. For example, global groups such as "All IU Statewide" should not be used.
  • Destination port cannot be "Any (1-65535)" when attempting to use a subnet as a destination.
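
To make the criteria in sections 7.5 through 7.8 concrete, here is a minimal sketch of checking a proposed rule against them. This is not a CNI tool; the Rule fields are hypothetical, and the two address ranges stand in for the real CNI-maintained "All_IU_Networks" group. The authoritative review is always performed by CNI.

```python
# Hypothetical sketch: validate a proposed firewall rule against sections 7.5-7.8.
import ipaddress
from dataclasses import dataclass

IU_NETWORKS = [ipaddress.ip_network("129.79.0.0/16"),   # placeholder IUB range
               ipaddress.ip_network("134.68.0.0/16")]   # placeholder IUPUI range
ANY_NET = ipaddress.ip_network("0.0.0.0/0")
ANY_PORT = "1-65535"

@dataclass
class Rule:
    source: str       # CIDR, e.g. "129.79.0.0/16"; "0.0.0.0/0" means Any
    destination: str  # CIDR; a /32 is a specific host
    port: str         # e.g. "443"; "1-65535" means Any

def is_iu_owned(net: ipaddress.IPv4Network) -> bool:
    return any(net.subnet_of(iu) for iu in IU_NETWORKS)

def allowed(rule: Rule) -> bool:
    src = ipaddress.ip_network(rule.source)
    dst = ipaddress.ip_network(rule.destination)
    if dst == ANY_NET:
        return False                            # 7.7: destination "Any" is never allowed
    host_dst = dst.prefixlen == 32
    if rule.port == ANY_PORT:
        return is_iu_owned(src) and host_dst    # 7.5: IU-owned source + /32 destination
    if src == ANY_NET:
        return host_dst                         # 7.6: /32 destination + specific port
    return host_dst or is_iu_owned(src)         # 7.8: subnet destinations need IU source

# HTTPS from anywhere to a single host is acceptable (7.6):
print(allowed(Rule("0.0.0.0/0", "129.79.1.1/32", "443")))          # True
# "Any" ports to an entire subnet is not (7.5):
print(allowed(Rule("129.79.0.0/16", "129.79.2.0/24", ANY_PORT)))   # False
```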

7.9 Individual DHCP addresses cannot be used as sources. The dynamic nature of DHCP allows for the potential of unexpected access during future IP allocations. VPN space, global groups, entire subnets, or static addressing can be used instead.

7.10 DNS entries must exist when using specific IPs as destination addresses. DNS entries can be set up by contacting IU DNS Administration.

  • IUPUI Data Center: dns-admin@iupui.edu
  • IUB Data Center: dns-admin@indiana.edu
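
As an unofficial pre-check before submitting a firewall request, you can confirm that a destination IP already resolves by looking up its reverse (PTR) record; the IP below is the example host from section 7.13:

```python
# Quick, unofficial check that a destination IP has a DNS entry (PTR record).
import socket

try:
    name, _, _ = socket.gethostbyaddr("129.79.1.1")
    print(f"DNS entry found: {name}")
except socket.herror:
    print("No DNS entry; contact IU DNS Administration before requesting the rule.")
```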

7.11 Source and destination host groups can be used to group together source and destination objects when meeting the following criteria:

  • Destination host groups must consist of members with specific /32 IP addresses that your group owns inside of the Data Center. Destination host groups cannot be selected as a source in a policy. Destination host groups must be specific to a security zone.
  • Source host groups can contain both specific /32 IP addresses and entire subnets outside of the Data Center. Source host groups cannot be selected as a destination in a policy. The same source host group may be used within multiple security zones.

7.12 Firewall security zones are used to split up networks within the Data Center.

UITS Core Services
  • Servers and systems managed by: UITS staff only
  • Operating system root, administrator, or equivalent access: UITS staff only
  • Operating system level interactive logins or virtual desktop sessions: UITS staff only
  • User-provided code: No
  • Examples: DNS, DHCP, NTP, ADS, CAS, Exchange, Lync, Oracle databases, HRMS, FMS, etc.

UITS Hosted Services
  • Servers and systems managed by: UITS staff only
  • Operating system root, administrator, or equivalent access: UITS staff only
  • Operating system level interactive logins or virtual desktop sessions: Any IU staff member
  • User-provided code: Yes
  • Examples: Webserve, CHE, CVE

IU Community
  • Servers and systems managed by: Any IU staff member
  • Operating system root, administrator, or equivalent access: Any IU staff member
  • Operating system level interactive logins or virtual desktop sessions: Any IU staff member
  • User-provided code: Yes
  • Examples: Non-UITS physical servers, Intelligent Infrastructure virtual servers provisioned for departments

Health Sciences
  • Servers and systems managed by: Any IU staff member
  • Operating system root, administrator, or equivalent access: Any IU staff member
  • Operating system level interactive logins or virtual desktop sessions: Any IU staff member
  • User-provided code: Yes
  • Examples: Regenstrief, School of Medicine, Speech and Hearing, Optometry, Dentistry, IUB Health Center

Each firewall security zone exists at both the IUPUI and IUB Data Centers. The following describes which zones are related as well as what the relationship means:

  • UITS Core Security Zone
    • Campus members:
      • IN-30-CORE
      • BL-30-CORE
  • UITS Hosted Security Zone
    • Campus members:
      • IN-32-UITS
      • BL-32-UITS
  • IU Community Security Zone
    • Everything else, including physical servers, VM Servers provisioned by SAV that are not UITS-managed/hosted services, and academic departments located within the data centers.
    • Campus members:
      • IN-33-COLO
      • BL-33-COLO

Campus members of the same security zone do not need firewall rules to communicate with each other. For example, a host in IN-30-CORE does not need any firewall rules to communicate with a host in BL-30-CORE, and vice versa. Rules are required for communication between different zones.

Training on these zones can be provided by Campus Network Engineering (CNE). Contact noc@indiana.edu to request a meeting with CNE regarding firewall security zone training.

7.13 Rules are specific to a single member firewall zone; one rule cannot cover inbound traffic for hosts in both BL-30-CORE and IN-30-CORE. For example, to allow HTTPS traffic from the world to one server in IN-30-CORE and another in BL-30-CORE, two rules are required:

  • Rule #1 within BL-30-CORE
    • Source: Any
    • Destination: 129.79.1.1/32
    • Port: HTTPS
  • Rule #2 within IN-30-CORE
    • Source: Any
    • Destination: 134.68.1.1/32
    • Port: HTTPS
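
Expressed with the hypothetical Rule sketch from section 7.8, the same policy is written once per member zone:

```python
# Continuing the hypothetical sketch from section 7.8: the HTTPS policy above
# requires two separate rules, one per member firewall zone.
rules = {
    "BL-30-CORE": Rule("0.0.0.0/0", "129.79.1.1/32", "443"),
    "IN-30-CORE": Rule("0.0.0.0/0", "134.68.1.1/32", "443"),
}
```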

7.14 Global address groups are built-in groups that any department can use. These are commonly used source groups that are maintained by CNI. "All_IU_Networks" is an example of a global address group. This group contains every IU-owned IP address.

This is document avng in the Knowledge Base.
Last modified on 2024-02-02 15:14:03.