Feature ideas

Please search first before posting to help others find and vote for your idea!
Centralized mapping configuration and control over data sources
Currently, each data source requires a specific mapping configuration tied to the exporter / integration deployment phase. This approach is inefficient when managing multiple data sources. We propose implementing a centralized, versioned configuration in Port to improve this process.

Proposal: Port will store a list of mapping configurations for each integration type. Each data source will have an option to specify which configuration version to use and whether or not to stream data into Port. This will allow sharing configurations between different exporters of the same integration and will decouple data source management from exporter / integration rollout.

Benefits:
- Reduced configuration duplication
- Easier updates and maintenance
- Improved scalability for managing many data sources
- Consistent configuration across environments

This feature aims to streamline data source management, making it more efficient for teams working with multiple data sources.

---

## Additional explanation based on k8s integration

Today, each k8s cluster is represented as a separate data source, with no way to share configuration across all clusters. In addition, a k8s data source can't be created on demand and depends on the deployment of the k8s exporter. Every configuration change must be applied to each k8s data source explicitly, either via the API/UI/Terraform or via ConfigMaps deployed on each cluster.

With the proposed approach, Port builders will be able to define and share configuration among multiple k8s clusters and roll out experimental changes gradually.

How this could work in practice:
- A k8s exporter is deployed on each cluster, supplied with appropriate credentials to access Port, and publishes only the cluster name. The exporter also subscribes to receive new instructions from Port (i.e., which data to stream).
- Port builders (developers) discover all k8s clusters associated with their account and choose which clusters should be enabled as data sources and which mapping configuration version should be applied.

This way, k8s exporters are deployed once and upgraded only when a new version of the exporter is released. Control over data input is managed centrally from Port, and configurations can be shared among multiple data sources of the same type.
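To make the proposal concrete, here is a sketch of what a centrally stored, versioned mapping might look like. The `version` and `clusters` fields are hypothetical (they do not exist in Port today); the `resources` block follows the general shape of the existing k8s exporter mapping format:

```yaml
# Hypothetical: a versioned mapping stored centrally in Port,
# shared by every k8s data source that pins this version.
integration: kubernetes
version: 3                      # builders pin each data source to a version
resources:
  - kind: v1/namespaces
    selector:
      query: "true"             # stream all namespaces
    port:
      entity:
        mappings:
          - identifier: .metadata.name
            title: .metadata.name
            blueprint: '"namespace"'

# Hypothetical per-cluster data source settings, managed in Port:
clusters:
  - name: prod-eu-1
    mappingVersion: 3           # shared config, applied centrally
    stream: true
  - name: staging-us-1
    mappingVersion: 4           # experimental version, rolled out gradually
    stream: true
```

Under this model, upgrading all production clusters to a new mapping is a single change in Port rather than a redeploy of every exporter.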
Terraform Data Source for fetching existing integrations
TL;DR: if Terraform is not creating the `port_integration` resource, then it should not delete it either.

---

As per https://docs.getport.io/guides/all/import-and-manage-integration/, when setting up a new integration via Terraform, it is necessary to first create the integration manually via the "Builder" UI and then import it into Terraform. This creates a problem for projects that are subsequently rolled back, e.g. because some change has broken a feature in the IDP, potentially unrelated to the new integration. Consider the following sequence:

1. The integration is created manually.
2. The integration is imported into Terraform.
3. Changes are applied to the live Port instance (`terraform apply`).
4. Something in that release is identified as breaking, so we need to roll back to an earlier version.
5. The earlier release is deployed: `terraform plan` & `apply` are run again from an earlier version of the code.
6. Terraform sees that the integration wasn't present in the earlier version, so it deletes it from both the live portal and the Terraform state!

Now we're back to square one, having to re-create the integration, with a new installation id, all over again.

Terraform has a convention for resources that aren't directly managed via IaC: data sources. I suggest the Port provider should:
- Offer a data source to look up an existing integration by id / name.
- Update the `port_integration` resource to manage only the config of the integration, not the integration itself.
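The suggested split could be sketched as follows. This is hypothetical HCL: the `port_integration` data source and a config-only `port_integration_config` resource do not exist in the provider today, and the argument names are assumptions:

```hcl
# Hypothetical: look up an integration that was created outside Terraform.
# Terraform never destroys objects referenced only through data sources.
data "port_integration" "k8s" {
  installation_id = "my-k8s-exporter" # assumed lookup key
}

# Hypothetical: manage only the integration's mapping config,
# not the lifecycle of the integration itself.
resource "port_integration_config" "k8s" {
  integration_id = data.port_integration.k8s.id
  config         = file("${path.module}/k8s-mapping.yaml")
}
```

With this split, rolling back to a release that lacks these blocks would remove only Terraform's management of the config; the integration itself, being referenced via a data source, would survive the rollback.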