Data Sources
Data Sources (Main Menu / Admin / Data Sources) are used by Importers to fetch data from external sources. A Data Source represents a connection, e.g. to a database, and can be used by multiple Importers.
Introduction
Txture supports a wide variety of Data Sources. As stated above, a Data Source is used by one or more importers for retrieving data from external sources. Each Data Source is compatible with specific importers. There are essentially two types of Data Sources in Txture: Generic Data Sources and Vendor Specific Data Sources.
Vendor Specific Data Sources can only be used for specific vendors, whereas Generic Data Sources allow inspecting data from all kinds of interfaces accessible via the internet.
Example: The Generic Data Source IP Network does not care whether the target machine behind the IP address is running at Amazon, Google or on-premise. The queried data depends on the command that the importer executes on the target machine, and this command can be chosen freely by the user. The Vendor Specific Data Source AWS, on the other hand, can only query a restricted set of data from machines launched at Amazon. The data retrievable from these machines is fixed by the vendor and cannot be influenced by the user.
The Data Source, as the name indicates, is the source the importer connects to for extracting data. The importer contains the logic for mapping the incoming data to Txture. More details can be found at Importers.
Generic Data Sources
Text Data Source
In this Data Source, it is possible to manually enter text in different formats such as JSON or CSV in the content field. Specific importers can be selected in the next step of creating an importer for this data source.
Folder Data Source
The Base-URL of the desired folder needs to be specified in the corresponding field. Any folder on the Txture machine can be chosen. Specific importers can be selected in the next step of creating an importer for this data source.
HTTP Data Source
The Base-URL of the target endpoint needs to be specified in the corresponding field, as well as the authentication method (optionally together with username and password). Custom headers can be added by clicking the plus button. This data source is used to connect to an external source via HTTP. Specific importers can be selected in the next step of creating an importer for this data source.
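For illustration only, the following Python sketch shows what such an HTTP request with basic authentication and one custom header boils down to; the URL, credentials and header name are hypothetical placeholders, not part of Txture:

```python
# Illustrative sketch: GET request with basic auth and a custom header.
# All values below are placeholders.
import requests

BASE_URL = "https://cmdb.example.com/api"   # corresponds to the Base-URL field
AUTH = ("importer-user", "secret")          # optional username / password
HEADERS = {"X-Api-Key": "placeholder-key"}  # custom header added via the plus button

response = requests.get(f"{BASE_URL}/servers", auth=AUTH, headers=HEADERS, timeout=30)
response.raise_for_status()
print(response.json())
```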
IP Network Data Source
This data source explores the given IP ranges. The user does not need to know the exact IP addresses: every IP within the given range is tested for a connection, and only the reachable ones produce a result. Open ports can be checked via ping or port scan, or the check can be skipped entirely. The IP Network Data Source also supports expanding the set of IPs via the trace path. The command which will be executed on each of the reachable machines must be specified in the specific importer that uses this Data Source.
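As an illustration of the underlying idea, the following Python sketch expands a hypothetical IP range and tests each address with a simple TCP port check; the range and port are placeholder values:

```python
# Illustrative sketch: expand an IP range and check a port on each address.
import ipaddress
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Expand the placeholder range 10.0.0.0/28 into individual host addresses and test each one.
for ip in ipaddress.ip_network("10.0.0.0/28").hosts():
    if port_open(str(ip), 22):
        print(f"{ip} is reachable on port 22")
```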
LDAP / Active Directory Data Source
An LDAP server address, a distinguished name (DN), a password, and a Base DN need to be specified in the corresponding fields.
Server address
The server address is given as a URL like ldap://example.com:389 or ldaps://example.com.
Warning:
Note that only ldaps:// is encrypted via SSL/TLS (requires a trusted and valid certificate). If no trusted certificate is installed on the LDAPS server, a CA certificate can be used to establish trust (see the ca-certs section of the operations docs).
The default LDAP ports are:
| Port | Usage |
|---|---|
| 389 | LDAP (not encrypted) |
| 636 | LDAPS (SSL/TLS encrypted) |
| 3268 | Active Directory global catalog (not encrypted) |
| 3269 | Active Directory global catalog (SSL/TLS encrypted) |
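For illustration, a minimal LDAPS bind and search using the ldap3 Python library might look as follows; the server, bind DN, password and Base DN are hypothetical placeholders:

```python
# Illustrative sketch: LDAPS bind and search with the ldap3 library.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://ldap.example.com:636", get_info=ALL)
conn = Connection(
    server,
    user="cn=reader,dc=example,dc=com",  # distinguished name (DN)
    password="secret",
    auto_bind=True,
)
# Search below the Base DN for person entries.
conn.search("dc=example,dc=com", "(objectClass=person)", attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry)
```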
SSH / WinRM Data Source
This data source allows connecting to systems offering SSH (mostly Linux) or configured for WinRM (mostly Windows).
Connections to hosts via SSH use port 22 by default, whereas connections to Windows hosts via WinRM use port 5985 for HTTP and port 5986 for HTTPS by default.
Similar to the IP Network Data Source, the SSH and WinRM Data Source requires IP ranges that will be explored by the specific importer. The user has the possibility to add several credentials (consisting of username and password). During the exploration of the given IP ranges, all credentials are tested on all IP addresses. Combinations of valid credentials and hosts are stored for the next importer run so as to keep unsuccessful authentication attempts to a minimum. The check for open ports can be done via ping or port scan, or skipped entirely.
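A rough Python sketch of this credential-testing idea, using the paramiko SSH library with hypothetical hosts and credentials, could look like this:

```python
# Illustrative sketch: try several SSH credentials against a host and keep the first match.
import paramiko

CREDENTIALS = [("root", "secret"), ("admin", "other-secret")]  # placeholder credentials

def try_ssh(host):
    """Return the first (username, password) pair that authenticates, else None."""
    for username, password in CREDENTIALS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            client.connect(host, port=22, username=username, password=password, timeout=5)
            return username, password
        except Exception:
            continue
        finally:
            client.close()
    return None

print(try_ssh("10.0.0.5"))  # placeholder host
```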
SSH / WinRM Asset Data Source
While in principle very similar to the SSH / WinRM Data Source, the SSH / WinRM Asset Data Source uses pre-existing assets in the repository instead of defined IP ranges. A typical workflow in this case could be importing assets from a hypervisor (VMware, HyperV etc.) and then using SSH / PowerShell commands to extract further information from these hosts.
In order for this data source to properly identify eligible hosts, the property storing the IP address has to be semantically tagged with "IP address".
File Upload Data Source
This data source comes in very handy if files such as a CSV file have to be imported quickly and possibly only once. As the name already gives away, it simply allows uploading files and accessing them from importers.
Vendor-Specific Data Sources
AlibabaCloud Data Source
This data source establishes a connection to Alibaba Cloud for extracting data.
For the connection it is necessary to provide the credentials, Access Key ID and Secret Key, as well as the Region. If a user has distributed the cloud infrastructure across several regions, an Alibaba Cloud Data Source must be added for each region.
AWS Data Source
This data source establishes a connection to AWS for extracting data about existing or running cloud service instances. This includes e.g. compute instances like EC2, block storage volumes or databases (RDS).
For the connection it is necessary to provide the credentials of an IAM user with programmatic access, consisting of Access Key ID and Secret Key. Furthermore, you have to declare the AWS region(s) the data source will establish a connection to. If a user has distributed the cloud infrastructure across several regions, it is possible to name all relevant regions within the same Data Source.
Since connectivity is established via AWS APIs and authenticated via tokens, you need to make sure that AWS Security Token Service (STS) actions are available and permitted in your corresponding IAM policy. Besides the Get, List or Describe actions needed to access the actual cloud service information, also add the following STS actions to the IAM policy that is attached to the accessing AWS user:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      ...,
      "Effect": ...,
      "Action": [
        ...,
        "sts:GetAccessKeyInfo",
        "sts:GetCallerIdentity",
        "sts:GetSessionToken",
        ...
      ],
      "Resource": ...,
      ...
    }
  ],
  ...
}
```
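To check outside of Txture that these STS permissions are in place, a small boto3 sketch such as the following can be used; the access key values and region are placeholders:

```python
# Illustrative sketch: verify STS access for the IAM user with boto3.
import boto3

sts = boto3.client(
    "sts",
    aws_access_key_id="AKIA...",          # Access Key ID of the IAM user (placeholder)
    aws_secret_access_key="placeholder",  # Secret Key of the IAM user (placeholder)
    region_name="eu-central-1",
)
# Succeeds only if the attached IAM policy allows sts:GetCallerIdentity.
print(sts.get_caller_identity()["Arn"])
```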
AWS S3 Data Source
This data source is not intended to import assets present in AWS, but rather assets from data files stored in AWS S3. In addition to the configuration of the AWS Data Source, a single region and the name of the bucket must be specified. For extracting data from different buckets, a separate AWS S3 Data Source is required for each bucket. The target files within the bucket are expected to contain either JSON or CSV content. The path to the target file will be specified in the importer. Specific importers for JSON and CSV can be selected in the next step of creating an importer for this Data Source.
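For illustration, reading such a CSV file from a bucket with boto3 might look like the following sketch; bucket name, object key and region are hypothetical:

```python
# Illustrative sketch: download a CSV file from S3 and parse its rows.
import csv
import io
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")
obj = s3.get_object(Bucket="my-inventory-bucket", Key="exports/servers.csv")
body = obj["Body"].read().decode("utf-8")

# Parse the CSV content that an importer would map onto assets.
for row in csv.DictReader(io.StringIO(body)):
    print(row)
```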
Google Cloud Data Source
This data source establishes a connection to Google Cloud for extracting assets and properties.
The configuration of this data source requires you to set up a service account with the appropriate permissions for the assets you want to import (e.g. Compute Viewer in order to import compute engine instances). Export the service account key in JSON format and paste the content in the access data field of the data source configuration. Additionally, you need to provide the project ID to import assets from.
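As a rough illustration of what the exported key enables, the following Python sketch authenticates with a service account key file and lists Compute Engine instances; the key file name, project ID and zone are placeholders:

```python
# Illustrative sketch: authenticate with a service account key and list compute instances.
from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json",  # exported key in JSON format (placeholder file name)
    scopes=["https://www.googleapis.com/auth/compute.readonly"],
)
compute = build("compute", "v1", credentials=credentials)
# List instances in one zone of the project the data source imports from.
result = compute.instances().list(project="my-gcp-project", zone="europe-west1-b").execute()
for instance in result.get("items", []):
    print(instance["name"])
```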
Google Cloud Storage Data Source
Similar to the AWS S3 Data Source, this data source allows reading CSV or JSON files stored in Google Cloud Storage buckets. In addition to the configuration needed for the Google Cloud Data Source, it is required to provide a bucket name. All files stored in this bucket can then be accessed from CSV or JSON importers.
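A minimal sketch of reading a JSON file from such a bucket with the google-cloud-storage Python library, using placeholder names, could look like this:

```python
# Illustrative sketch: download a JSON object from a Cloud Storage bucket.
import json
from google.cloud import storage

client = storage.Client.from_service_account_json("service-account-key.json")
bucket = client.bucket("my-inventory-bucket")
blob = bucket.blob("exports/servers.json")

# Download the object content as text and parse it as JSON.
data = json.loads(blob.download_as_text())
print(data)
```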
Microsoft Azure Data Source
This data source establishes a connection to Microsoft Azure for extracting assets such as virtual machines or load balancers.
The Data Source requires an App registration to be configured. From this application, the Data Source needs the following parameters: the Client App ID, the Tenant (tenant ID of the application), the Password (key of the application), the Subscription ID and the Azure Environment (region). The official Azure documentation provides step-by-step instructions on how to create such an application. If the cloud infrastructure is distributed over several regions, a Microsoft Azure Data Source must be added for each region.
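For illustration, the same parameters can be used with the Azure SDK for Python to list virtual machines; all IDs below are placeholders:

```python
# Illustrative sketch: list Azure VMs using an App registration's credentials.
from azure.identity import ClientSecretCredential
from azure.mgmt.compute import ComputeManagementClient

credential = ClientSecretCredential(
    tenant_id="00000000-0000-0000-0000-000000000000",   # Tenant (placeholder)
    client_id="11111111-1111-1111-1111-111111111111",   # Client App ID (placeholder)
    client_secret="placeholder",                        # Password / key of the application
)
# Subscription ID (placeholder)
compute = ComputeManagementClient(credential, "22222222-2222-2222-2222-222222222222")
for vm in compute.virtual_machines.list_all():
    print(vm.name)
```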
Kubernetes Data Source
This data source establishes a connection to Kubernetes for extracting data. Due to authentication differences, the data source has to distinguish between Amazon Elastic Kubernetes Service (AWS EKS) and other Kubernetes instances.
For the connection it is necessary to provide a Master URL together with Username and Password. In addition, the content of the CA Certificate is required for connecting to the Data Source.
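As a rough sketch of what these parameters correspond to, the following Python snippet calls the Kubernetes API directly with basic authentication and a CA certificate; the URL, credentials and file name are placeholders:

```python
# Illustrative sketch: query the Kubernetes API with Master URL, credentials and CA cert.
import requests

MASTER_URL = "https://k8s-master.example.com:6443"  # Master URL (placeholder)
response = requests.get(
    f"{MASTER_URL}/api/v1/namespaces",
    auth=("importer-user", "secret"),  # Username / Password (placeholders)
    verify="cluster-ca.crt",           # CA Certificate used to trust the API server
    timeout=30,
)
response.raise_for_status()
for item in response.json()["items"]:
    print(item["metadata"]["name"])
```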
OpenStack Data Source
This data source establishes a connection to OpenStack for extracting data.
For the connection it is necessary to provide the Endpoint as a URL together with Username and Password. Additionally, the Project name is required. Optional fields are the Domain name and the Project domain name. These values can be set explicitly in OpenStack; if they are not, they default to "Default", which this Data Source also assumes when the fields are left empty.
Info:
The OpenStack API used by this Data Source can only be reached using the default admin; this behavior cannot be influenced by Txture!
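For illustration, a connection with the same parameters using the openstacksdk Python library might look like the following sketch; endpoint and credentials are placeholders:

```python
# Illustrative sketch: connect to OpenStack and list servers with openstacksdk.
import openstack

conn = openstack.connect(
    auth_url="https://openstack.example.com:5000/v3",  # Endpoint (placeholder)
    username="importer-user",
    password="secret",
    project_name="production",
    user_domain_name="Default",      # Domain name (defaults to "Default")
    project_domain_name="Default",   # Project domain name (defaults to "Default")
)
for server in conn.compute.servers():
    print(server.name)
```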
Oracle Data Source
This data source establishes a connection to Oracle for extracting data.
For the connection this Data Source needs the following input: User OCID, Private Key, Fingerprint, Tenancy OCID, Compartment ID, Passphrase and the Region. Oracle offers a help page regarding the necessary credentials for an external connection. If a user has distributed the cloud infrastructure to several regions, an Oracle Data Source must be added for each region.
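As an illustration of these credentials, a minimal sketch with the OCI Python SDK could look as follows; all OCIDs, key data and the region are placeholders:

```python
# Illustrative sketch: list compute instances with the OCI Python SDK.
import oci

config = {
    "user": "ocid1.user.oc1..example",         # User OCID (placeholder)
    "key_file": "oci_api_key.pem",             # Private Key (placeholder file)
    "pass_phrase": "placeholder",              # Passphrase of the private key
    "fingerprint": "aa:bb:cc:dd:ee:ff:00:11",  # Fingerprint (placeholder)
    "tenancy": "ocid1.tenancy.oc1..example",   # Tenancy OCID (placeholder)
    "region": "eu-frankfurt-1",                # Region (placeholder)
}
compute = oci.core.ComputeClient(config)
instances = compute.list_instances(compartment_id="ocid1.compartment.oc1..example")
for instance in instances.data:
    print(instance.display_name)
```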
HyperV Data Source
This data source establishes a connection to a HyperV server for extracting data.
For the connection this Data Source needs the Host Address together with Username and Password. The connection is internally established via WinRM. If the checkbox for HTTPS is enabled, port 5986 will be used, otherwise port 5985. The Auth scheme can be selected from a drop-down menu.
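For illustration, a WinRM connection with these parameters using the pywinrm Python library might look like this sketch; host, credentials and the executed command are placeholders:

```python
# Illustrative sketch: run a PowerShell command on a Hyper-V host via WinRM.
import winrm

# HTTPS enabled -> port 5986; otherwise port 5985 would be used.
session = winrm.Session(
    "https://hyperv.example.com:5986/wsman",      # Host Address (placeholder)
    auth=("DOMAIN\\importer-user", "secret"),     # Username / Password (placeholders)
    transport="ntlm",                             # corresponds to the selectable Auth scheme
)
result = session.run_ps("Get-VM | Select-Object Name, State")
print(result.std_out.decode())
```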
Jira Software
Txture allows natively connecting to a Jira instance (both on-premise and in the cloud). One use case for this is tracking progress during a migration by annotating an issue ID on the assets in Txture (or, alternatively, by tagging the Jira issue with a Txture ID). However, given the versatility of both Jira and Txture, a wide range of use cases apply.
The integration is rather straightforward and requires entering an instance URL along with a username and either a password or an API token. Additionally, the data source allows defining a set of custom HTTP headers that will be sent to Jira along with all requests. This is particularly helpful if you operate Jira behind a proxy such as an API gateway.
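For illustration, querying a Jira issue over the REST API with an API token and one custom header could look like the following Python sketch; the URL, credentials and header are placeholders:

```python
# Illustrative sketch: read a Jira issue via the REST API with basic auth and a custom header.
import requests

JIRA_URL = "https://jira.example.com"  # instance URL (placeholder)
response = requests.get(
    f"{JIRA_URL}/rest/api/2/issue/MIG-123",                    # placeholder issue key
    auth=("migration-bot@example.com", "api-token-placeholder"),
    headers={"X-Gateway-Key": "placeholder"},                  # custom header, e.g. for an API gateway
    timeout=30,
)
response.raise_for_status()
print(response.json()["fields"]["status"]["name"])
```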
FAQ
Can I use the same Data Source for multiple importers?
Yes, it is possible to use the same Data Source for multiple importers. When creating an importer, the user can pick from a list of saved Data Sources. A Data Source can only be deleted if it is not in use by any importer.
Can I change a Data Source if it is in use by one or more importers?
Yes, the Data Source can be updated without restrictions. However, the user should be very careful with changes regarding the endpoint or similar, because the importer synchronizes the data in the repository.
Example: If a Data Source that takes a single endpoint as input is changed so that a different endpoint is requested, the importer will delete the previously imported data and only keep the data that is now available. If the intention was to request several endpoints, a separate Data Source must be created for each endpoint.