All Topics

Hello, I'm having a hard time trying to find which data source the events from a search are originating from. The search is: source="/var/www/html/PIM/var/log/webservices/*" I've looked through "Files & Directories" (which is where I thought I would find it) and the rest of the Data Inputs, but I can't seem to locate it anywhere. A side question: I tried creating a new Files & Directories data input by putting in the full Linux path like below: //HostName/var/www/html/PIM/var/log/webservices/* But it says "Path can't be empty." I'm sure this is probably not how you format a Linux path; I just couldn't find what I'm doing wrong. Thanks for any help at all, Newb
Hi, my enterprise is using Mothership 2.0. Recently, Mothership seemed to continue its collection of data, but a few are not uploading to their respective indexes, and we are having trouble getting it to work.
Some years ago I created a (beautiful!) dashboard with multiple panels, which presented related data from different angles. Several upgrades of the Splunk server later (currently using Splunk Enterprise 9.1.5), all of the panels -- except for the one that shows the raw results of the base search -- stopped working. The common base search is defined as:

<form version="1.1" theme="dark">
  <label>Curve Calibration Problems</label>
  <search id="common">
    <query>index=$mnemonic$ AND sourcetype="FOO" ... | eval Curve=replace(Description, ".* curve ([^\(]+) \(.*", "\1")</query>
    <earliest>$range.earliest$</earliest>
    <latest>$range.latest$</latest>
  </search>

And then the panels add to it like this, for one example:

<panel>
  <title>Graph of count of errors for $mnemonic$</title>
  <chart>
    <search base="common">
      <query>top limit=50 Curve</query>
    </search>
...

Note how the base search's ID is "common", which is exactly the value referred to by base. Again, the base search itself works correctly. But when I attempt to edit the panel now, the search expression is shown as just the query that used to be appended to the base. If I click the "Run Search" link in that window, I see that, indeed, only that expression is searched for, predictably yielding no results. It seems like something has changed in Splunk; how do I restore this dashboard to working order?
Hello Smarties... Can someone offer some assistance? We recently started ingesting Salesforce into Splunk. Usernames are coming in as IDs (00000149345543qba) instead of Jane Doe, so I was told to use a join to get the usernames or names and add them to the sourcetype I need them joined with. I am trying to get the "Login As" events, which are under sourcetype="sfdc:setupaudittrail". How do I get the Login As events with usernames, if usernames are under the user index and the Login As events are under the setupaudittrail sourcetype? Here is my attempted search, which doesn't come up with anything, although I know the events exist:

index=salesforce sourcetype="sfdc:user" | join type=outer UserAccountId [search index=salesforce sourcetype="sfdc:setupaudittrail" Action=suOrgAdminLogin]
We can’t guarantee the health of our services or a great user experience without data from our applications. Is CPU usage high? Are there an increased number of requests? Do we have too many Kubernetes nodes stuck in a NotReady state? These metrics and many others impact our services and our customers, but if we can’t see them, we can’t fix them. So, we build out an observability practice, instrument our services, collect all the metrics, export metrics to an observability backend, and quickly realize that our systems produce an overwhelming amount of data. We store metrics to understand trends, but data storage costs money, so collecting and storing everything quickly becomes a budgetary problem. In the noise of all that data, it’s also difficult to identify which metrics are relevant in determining what is actually negatively impacting our services and users.

In a world where setting up a functional observability practice is as easy as installing the OpenTelemetry Collector configured with auto-instrumentation, there are different ways to manage metric pipelines so that metric collection is sustainable, cost-effective, and provides real value to the reliability of our applications. In this post, we’ll look at how OpenTelemetry processors specifically can help manage data from within metric pipelines to avoid exporting and storing unhelpful data, so you can focus on service reliability, faster troubleshooting, and lower observability costs.

Managing Metric Data Volume Best Practices

The following best practices help eliminate metric noise, reduce metric collection volume, and ensure helpful metrics are available and ready to support troubleshooting efforts.

Use OpenTelemetry semantic conventions for metric names and attributes
Collect metrics intentionally
Monitor the pipeline itself
Optimize the pipeline and exporting processes

Let’s take a look at each of these.

OpenTelemetry Semantic Conventions

Using OpenTelemetry metrics semantic conventions when naming metrics or metric attributes helps with data analysis and troubleshooting. Defining clear metric names and attributes also helps identify redundancies or commonalities between metrics. Without the use of semantic conventions, different engineering teams might use different names for the same metric, leading to metric redundancy and increased data volume. For example, metrics around total HTTP requests could be named: http_requests_total, total_http_requests, http_request_count, etc. With semantic conventions in place, these individual metrics can be consolidated into one single, shared metric like http.server.requests, which captures aggregated total requests and attributes like request method and endpoint. When metrics follow naming conventions, aggregations, filters, and transformations can more easily be applied to reduce the volume of metric data, reduce the cost of backend platform storage, and improve the effectiveness of observability practices.
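As a rough illustration of that consolidation (jumping ahead to the metrics transform processor covered later in this post), a sketch of a Collector configuration that folds the three hypothetical legacy names above into the conventional one might look like this:

processors:
  metricstransform:
    transforms:
      # combine the differently named request counters into a single metric
      - include: ^(http_requests_total|total_http_requests|http_request_count)$
        match_type: regexp
        action: combine
        new_name: http.server.requests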
Collect and Store Metrics Intentionally

With semantically named metrics, it’s easier to identify those that provide value and rename or remove the ones that don’t, but how do you determine which metrics are and are not helpful? Here are some questions to consider:

Could the metric be used for an actionable, high-priority alert?
Would the data reported by the metric create a meaningful dashboard?
Is the individual metric meaningful? Or would an aggregation be more impactful?

It’s also important to note that not all metrics need to or should be exported to backend observability platforms – not all data is relevant for troubleshooting or development purposes, and it doesn’t all need to be readily available within observability backends. Cold data that won’t actively be used, like metrics necessary for compliance or audit purposes, can be exported to backend storage like Amazon S3 (perhaps even Glacier). This can lower storage costs and keep observability backends clear of metrics that aren’t immediately helpful in monitoring the resiliency and performance of applications.

Monitor the pipeline

Monitoring the pipeline itself (e.g. Collector performance and resource limits) can help identify delays and/or constraints in processing or exporting. This ensures data integrity and quick insight into any issues with metric collection. It also provides insight into the performance and effectiveness of metric collection so you can iterate on which metrics you’re collecting and how you’re collecting them.

Optimize the pipeline and exporting process

Optimizing the pipeline collection and exporting processes ensures efficient data flow from collection to the backend platform, so you can prevent bottlenecks and delays and successfully use the metrics you collect for performance monitoring and troubleshooting.

OpenTelemetry Processors

So how do you put these best practices into… practice? The OpenTelemetry Collector provides several processors that can be configured to transform data before it’s sent to observability platform backends. We can think of these processors more as pre-processors, taking many points of data and interpreting or condensing them into more meaningful information. Processors offer more control over metric collection so data can be reported in useful ways that reduce metric noise and storage costs.

Filter Processor

Metric data can be included or excluded through configuration of the filter processor in the OpenTelemetry Collector configuration file. Any low-priority or unhelpful metrics, like those with invalid types or specified values, can be filtered out. Here’s an example that shows how to drop an HTTP healthcheck metric:
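A minimal sketch of that filter, assuming a hypothetical metric named http.server.healthcheck (adjust the name to whatever your instrumentation actually emits):

processors:
  filter/drop_healthcheck:
    metrics:
      exclude:
        match_type: strict
        metric_names:
          # hypothetical healthcheck metric name
          - http.server.healthcheck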
This metric doesn’t provide meaningful or actionable data around application performance or reliability, so to reduce metric volume and storage costs, we can drop it before exporting it to our backend observability platform.

Attribute and Metric Transform Processors

The OpenTelemetry attribute and metrics transform processors can be configured to modify and/or consolidate metrics. Their functionalities overlap a bit – you can add attributes or update attribute values using either processor. The metrics transform processor provides more room for data manipulation, and the docs recommend that if you’re already using the metrics transform processor functionality, there’s no need to switch over to the attribute processor.

The metrics transform processor can be used for renaming metrics, so you can modify metric names or attributes while sticking to semantic conventions to reduce the number of discrete but related metrics, which are often billed separately in observability backends. For example, when using multiple cloud providers like Amazon Web Services (AWS) or Google Cloud Platform (GCP), each provider reports CPU utilization data under different metric names. Instead of reporting these metrics separately, they can be combined into a single metric name following semantic conventions to reduce cardinality and improve metric management. AWS and GCP CPU utilization metrics can be updated to report under a single cloud.vm.cpu.utilization metric, with attributes indicating the cloud provider and cloud service (see the consolidated configuration sketch at the end of this section).

Group by Attributes Processor

To organize metric data and more easily apply aggregations and transformations to specific groups of metrics, use the group by attributes processor. For example, if you’re collecting data from multiple services, each running on multiple instances, the group by attributes processor can be configured to group metrics by service name. Aggregations can then be applied to sum or average metrics for each instance of each service. If service_a is running on instance_1, instance_2, and instance_3, we can use the group by attributes processor to combine these individual instance metrics into one single aggregated service_a metric. This reduces cardinality and data volume, while also making the metric data easier to troubleshoot.

Batch Processor

While the batch processor doesn’t manipulate the raw metric data itself, it does contribute to an effective metrics pipeline by improving export performance. Effective exporting of our data means it gets to where it needs to go and is readily available for use when we need it. Batching metrics to compress data reduces the number of outgoing connections in order to improve exporting performance. It can easily be configured within the Collector configuration file by specifying batch under the processors block, and additional configuration options like batch size and timeout can be specified for more fine-grained control.

Memory Limiter Processor

Like the batch processor, the memory limiter processor is related to the overall functionality of the metric pipeline. Using this processor ensures metric collection functions properly and data is collected, processed, and exported successfully. The memory limiter processor performs periodic checks of memory usage to prevent the Collector from running out of memory. If the Collector hits memory limits, it will start refusing data, which leads to metric data loss. The consolidated sketch below shows how the memory limiter, along with the other processors discussed in this section, can be configured within the Collector configuration file.
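Below is a rough, consolidated sketch of how these processors might be wired together in a Collector configuration. The provider-specific metric names, the service.name grouping key, and the batch and memory limits are illustrative assumptions rather than values from the original post, and the otlp receiver and exporter definitions are omitted for brevity:

processors:
  memory_limiter:
    check_interval: 1s
    limit_mib: 512          # size these limits for your Collector host
    spike_limit_mib: 128
  metricstransform:
    transforms:
      # rename provider-specific CPU metrics (hypothetical source names)
      # to a single, semantically conventional metric
      - include: aws.ec2.cpu.utilization
        match_type: strict
        action: update
        new_name: cloud.vm.cpu.utilization
        operations:
          - action: add_label
            new_label: cloud.provider
            new_value: aws
      - include: gcp.compute.cpu.utilization
        match_type: strict
        action: update
        new_name: cloud.vm.cpu.utilization
        operations:
          - action: add_label
            new_label: cloud.provider
            new_value: gcp
  groupbyattrs:
    keys:
      - service.name        # group per-instance metrics under each service
  batch:
    send_batch_size: 8192
    timeout: 5s

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, metricstransform, groupbyattrs, batch]
      exporters: [otlp]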
Wrap Up

Observability isn’t about collecting all of the data. Collecting all the data is counterproductive to maintaining reliable systems that successfully support our customers. Instead, observability is about surfacing and analyzing actionable data. Managing metrics data volume at the point of collection with OpenTelemetry processors can help reduce the noise, making it easier to detect anomalies and resolve issues faster.

OpenTelemetry, a native part of Splunk Observability Cloud, provides built-in metric pipeline management and one unified observability backend platform for profiles, metrics, traces, and logs – no third-party pipeline management tools required. With Splunk Observability Cloud, you can also manage some metrics after ingestion with Metrics Pipeline Management (MPM). Interested in reducing metric volume and storage costs while improving troubleshooting efficiency? Start a Splunk Observability Cloud 14-day free trial and adopt OpenTelemetry to tame your metric pipeline and experience the benefits of one centrally located backend observability platform (aka Splunk Observability Cloud). Using the power of Splunk Observability, watch your metric data flow in and your data storage costs go down, all while optimizing troubleshooting and reducing time to resolve incidents thanks to helpful and well-managed metric data.

Resources

OpenTelemetry Metrics
Transforming telemetry
OpenTelemetry Collector Configuration
Data Storage Costs Keeping You Up at Night? Meet Archived Metrics
Hello Everyone, I'm having a hard time finding the appropriate way to display data. I have duplicate data where one field is unique. I would like to dedup but keep one instance of the unique value.

Example of what I want to dedup:

field1  field2  field3  field4
a       b       c       d
a       b       c       e
a       b       c       f

Example of what I would like to see:

field1  field2  field3  field4
a       b       c       d

Any help would be greatly appreciated. Regards.
Hi guys, looking to up my skill on this and wondering what you would suggest around it, and what the best training certification would be. I think you need to be on the cloud for this, even if you have ES? I know I can't use Mission Control either, as we are not on the cloud. What would you recommend other than Mission Control? Thanks, Ahmed
Hello, good afternoon. Does anyone know how to integrate the Adabas database with Splunk, and where I can download the JDBC drivers for Splunk DB Connect?
Hello, I am using Splunk Enterprise. From the MS site I registered an app with permissions for Security Alerts and Security Incidents, and I also added the account in Splunk. It is impossible to add an input; I get the error below:
Dear all, requesting your support in achieving the below. I have a method which takes a custom object as a parameter. To run a getter chain, I first need to cast the parameter into an Object array, then cast it into my POJO class, and then run the getter on it. How can this be achieved? My code snippet is below.

ClassName: com.mj.common.mjServiceExecute
Method: execute(com.mj.mjapi.mjmessage)

public abstract class mjServiceExecute implements mjServiceExecuteintf {
  public mjmessage execute(mjmessage paramMjMessage) {
    mjmessage mjmjmesg = null;
    try {
      Object[] arrayOfObject = (Object[]) paramMjMessage.getPayload();
      MjHeaderVO mjhdrvo = (MjHeaderVO) arrayOfObject[0];
      String str1 = mjhdrvo.getName();
    }
  }
}

I want to extract the value of str1 to split the business transaction. Requesting your assistance.
I'm working on an environment with a mature clustered Splunk instance. The client wishes to start dual-forwarding to a new replacement environment which is a separate legal entity (they understand the imperfections of dual-forwarding, possible data loss, etc.). They need to rename the destination indexes in the new environment, dropping a prefix we can call 'ABC'. I believe the easiest way is to approach this via INGEST_EVAL on the new indexers. There are approximately 20 indexes to rename, for example:

ABC_linux
ABC_cisco

transforms.conf (located on the NEW indexers):

[index_remap_A]
INGEST_EVAL = index="value"

I have read the transforms.conf spec file for 9.3.1 and a 2020 .conf presentation, but I am unable to find great examples. Has anyone taken this approach? As it is only a low volume of remaps, it may be best to approach this statically.
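For reference, a minimal sketch of what an ingest-time remap might look like, assuming the goal is simply to strip the ABC_ prefix. The props.conf scope ([your_sourcetype]) is a placeholder to replace, repeat, or broaden (e.g. to a [source::...] stanza) so it covers the data being remapped, and the prefix's case may need adjusting to match the real index names:

# props.conf on the NEW indexers
[your_sourcetype]
TRANSFORMS-strip_abc_prefix = strip_abc_prefix

# transforms.conf on the NEW indexers
[strip_abc_prefix]
# replace() leaves the index untouched when the prefix is absent
INGEST_EVAL = index=replace(index, "^ABC_", "")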
Hello Splunkers, I'm working with the latest version of Splunk Add-on Builder to index data from a REST API. The TA only pulls the first page of results by calling:

https://mywebpage.com/api/source/v2

At the bottom of the pulled data is the URL for the next page:

"next_url" : "/api/source/v2?last=5431"

How do I configure the TA to iterate through all the pages? I checked the link below, but I don't understand how (or whether it is possible) to pass the variable from the modular input to my endpoint like this, or in some other way:

https://mywebpage.com/api/source/v2?last=${next_url}

https://docs.splunk.com/Documentation/AddonBuilder/4.3.0/UserGuide/ConfigureDataCollection#Pass_values_from_data_input_parameters

Any ideas? Thanks!
I am looking to replace a sourcetype using props.conf / transforms.conf, so far with no luck.

props.conf

[original_sourcetype]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TIME_PREFIX = oldtimeprefix
TIME_FORMAT = oldtimeformat
pulldown_type = 1
TRANSFORMS-set_new = set_new_sourcetype

[new_sourcetype_with_new_timeformat]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TIME_PREFIX = newtimeprefix
TIME_FORMAT = newtimeformat
pulldown_type = 1
#rename = original_sourcetype

transforms.conf

[set_new_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = ^source::var/log/path/tofile.log
FORMAT = sourcetype::new_sourcetype_with_new_timeformat
DEST_KEY = MetaData:Sourcetype

I tried different REGEXes, including REGEX = var/log/path/tofile.log

I also tried setting it like this in props.conf:

[source::var/log/path/tofile.log]
TRANSFORMS-set_new = set_new_sourcetype

I am also looking at inputs.conf, which has monitoring stanzas for all syslog traffic; perhaps some blacklisting/whitelisting based on source can be done there. But I am curious as to what is not working with my props/transforms. Thanks
In my environment, Palo Alto (proxy) logs are being stored in Splunk. I want to know what kind of operation on a server generates high-risk communication to the internet, using the Palo Alto logs together with Windows event logs, Linux audit logs, or something similar. Is this possible with a Splunk correlation search?
I configured a search head cluster, configured a captain, and added the search heads to the indexer cluster. I now want to break the SH cluster and have done this so far, all from the CLI:

Removed the member that was not the captain; that went OK.
Tried to remove the other member; that didn't work, the command just hung for half an hour before I gave up and aborted it.
Tried to set the captain to static mode and did a clean raft, but still no luck.
Configured disabled=1 in the shclustering part of server.conf, and this time it went OK, I guess. I now get the message that this node is not part of any cluster configuration.

Over to the indexer cluster, where I now want to get rid of the search heads, which still show up in the GUI as up and running:

I ran the command splunk remove cluster-search-heads and that was successful, but the search heads are still there in the indexer clustering GUI.
Some suggest that this will go away after a few minutes, and that after a restart of the manager node it will certainly go away. I have now waited a whole day and restarted, but they are still showing as up and running, with a green checkmark too.

Where does it get its information from, and how can I get rid of them?
Hello, for example I have 2 lookups, first.csv and second.csv.

first.csv has 1 column, fruit_name, with multiple values:

fruit_name
apple
banana
melon
mango
grapes
guyabano
coconut

second.csv has 2 columns, fruits and remarks, with a multivalue cell under the fruits column:

fruits                  remarks
apple mango guyabano    visible

How can I check whether all the values of second.csv (apple, mango, guyabano) are present in the fruit_name column of first.csv, and if so echo out the remarks value of visible? Thanks in advance
Hi team, due to an SSL cert issue the Database Queries tab is not loading, which we are working on. The customer is asking to fetch the following data: query, time executed, time taken for completion, etc. Is there any way we can get this data from the database? In which database is the queries data located, and what is the path to the DB? Please share the DB and table names so we can export the data from the database. Thanks
Hello Splunkers!! In a scheduled search within Splunk, we have set up email notifications with designated recipients. However, there is an intermittent issue where recipients sometimes do not receive the scheduled search email. To address this, we need to determine whether there is a way within Splunk to verify that the recipients successfully received the email notifications. Please help me identify how to check this in Splunk.

index=_internal source=*splunkd.log sendemail

I have tried the above search, but it does not provide information about the recipients' email addresses.
Hi, I got an error after completing the setup of Enterprise Security in my lab. At first I was using Windows, but whenever I tried to set up Enterprise Security I got:

Error in 'essinstall' command: (InstallException) "install_apps" stage failed - Splunkd daemon is not responding: ('Error connecting to /services/admin/localapps: The read operation timed out',)

Then I tried installing fresh Splunk Enterprise in WSL (in my case Ubuntu 22). The install succeeded and everything worked normally. After that, I tried installing Enterprise Security again, and this time I got a success notification when setting it up via the web GUI. Unfortunately, after the restart completed I can't open Splunk Enterprise.

This is what my CLI looks like:

I cannot see any error in my CLI; that's why I am asking here. Maybe somebody can help me?
Mvmap has different results on different versions. The left screen is version 9.3.1 and the right is 9.0.5. If the field has more than one value, the results will be equal.