All Posts


OK. Two things. 1. @isoutamo It doesn't matter how many times the same setting is specified; only the "last one" is effective, so on its own specifying INDEXED_EXTRACTIONS twice doesn't do anything. Of course, which setting counts as the "last one" depends on the settings precedence. 2. @dbray_sd It makes sense. If you simply run btool on a SH without providing an app context, you get the effective settings flattened using "normal" setting precedence, without taking app context into account (as if all settings were specified in the global system context). For quite a few versions now btool has supported an --app=something argument so you can evaluate your settings in an app context. But, as far as I remember, it still won't check user-level settings, and I'm not 100% sure it properly checks permissions. So yes, your solution makes sense. If you haven't explicitly exported your app's contents, they'll only be usable in that app's context.
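For anyone following along, a rough sketch of both points (the app name my_app and the sourcetype stanza are placeholders, not from the original thread):

# Evaluate props in a specific app context instead of the flattened global view
$SPLUNK_HOME/bin/splunk btool --app=my_app props list my:sourcetype --debug

# my_app/metadata/local.meta - make the app's knowledge objects visible outside the app
[]
export = system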
That's to be expected as well. If your input is not broken into single events properly, you might end up with a small number of huge data blobs, each effectively consisting of several "atomic" events. Since they'd get cut off at the TRUNCATE point, all the data following that point would be lost.
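As a rough sketch of the relevant settings (the sourcetype name and the line-breaking regex are assumptions and would need to match your actual data):

# props.conf on the first full parsing instance (HF or indexer)
[azure:firewall]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000

SHOULD_LINEMERGE=false with a LINE_BREAKER that matches the real event boundary usually fixes the "few huge events" symptom; only raise TRUNCATE if single events are legitimately that long.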
Thank you for your input. It might be the line breaker setting that is causing this. In addition, the number of events received is low considering it's an Azure Firewall producing 10-15 GB of logs daily.
  | eval newField=mvjoin(host, ",")  
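For context, a minimal sketch of how that eval is typically used, assuming host has first been collected into a multivalue field (the stats clause here is an assumption):

<your_search>
| stats values(host) AS host
| eval newField=mvjoin(host, ",")

mvjoin() only works on a multivalue field, so without the stats (or similar) step it has nothing to join.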
Well, right as I go to create a ticket, I stumbled onto an old note that had the fix. I had to do the following on the SH:

sudo -u splunk mkdir -p /opt/splunk/etc/apps/RabbitMQ_Settings/metadata
sudo -u splunk vim /opt/splunk/etc/apps/RabbitMQ_Settings/metadata/local.meta

[]
access = read : [ * ], write : [ admin ]
export = system

I recalled something about permissions or some other weirdo Splunk requirement. Aggravating, but at least it's working as expected now.
Hi @Sailesh6891 , you could try something like this:

<your_search>
| stats values(host) AS host BY index
| nomv host

Ciao. Giuseppe
The indexer and the new CM will have logs to help indicate what is happening, something to point you in the right direction. Please look there and post anything of interest if you still need help after reviewing.

As for the web URL on the indexer: IMO, from a security standpoint it should always be disabled. It's your environment, though, so hopefully there is a good reason for having it enabled.

Knowing that the web URL availability turns on and off does tell me that your old CM has a custom app that enables the web URL; the new CM likely does not, so when the new CM pushes a bundle, the indexer removes the old CM's custom app and disables the web URL.
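A rough sketch of a starting point on the indexer (default install path assumed; the grep patterns are only suggestions):

# Recent clustering-related messages on the peer around the time it shut down
grep -iE "clustering|CMSlave|ERROR" /opt/splunk/var/log/splunk/splunkd.log | tail -n 100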
How can we concatenate values from one field and put them in a new field, separated by commas? E.g., if I run a search, I get a number of hosts in the host field. I want to concatenate them all into one field, separated by commas.
In props.conf:

[tenable:sc:vuln]
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

In transforms.conf:

[tenable_remove_logs]
SOURCE_KEY = _raw
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

Do you have any other TRANSFORMS-<class> or REPORTS-<class> statements in this props? The order of processing could be creating issues. I'm throwing Hail Marys since I'm at a loss.
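If it helps, a quick sanity check (index name is a placeholder) to confirm whether ABCSCAN events are still being indexed after the change; also keep in mind that index-time transforms like this only take effect on the first full parsing instance (heavy forwarder or indexer), not on a universal forwarder:

index=your_index sourcetype=tenable:sc:vuln ABCSCAN earliest=-15m
| head 5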
Hi,

Our Linux machine has reached End of Support, so we are moving the Cluster Master from one machine to another. I set up the cluster master on the new hardware and it was working well, but when I changed the master node URL on the indexer, it stopped working. The indexer doesn't start by itself, and even when I start it manually it stays running only for some time, during which the web UI of the indexer does not work. After a while the indexer stops on its own. The same happened with another indexer as well.

When I revert to the old cluster master, all the issues are resolved automatically: the indexer keeps running, the web UI is available, and no issues are noticed.

Any idea why the indexer keeps shutting down? I am on Splunk version 9.0.4.

Regards,
Pravin
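Not a definitive diagnosis, but for reference a minimal sketch of the peer-side change (hostname and port are placeholders); a common gotcha is that pass4SymmKey on the peer must match the one configured on the new cluster manager:

# server.conf on each indexer (cluster peer) - only the URI should need to change
[clustering]
manager_uri = https://new-cm.example.com:8089
pass4SymmKey = <must match the secret configured on the new cluster manager>

Follow the edit with a restart of splunkd on the peer.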
Thank you for the suggestion.  I could not test it, as an alternative approach has been adopted in the meantime.
By accident I ran into the same problem. The brute-force solution looks like:

| eval my_time1=strptime(genZeit, "%Y-%m-%dT%H:%M:%S%:z")
| eval my_time2=strptime(genZeit, "%Y-%m-%dT%H:%M:%S.%3N%:z")
| eval my_time=if(isnotnull(my_time1),my_time1,my_time2)

Try to convert the time with both of the possible time formats (be careful: my example does not reflect the time format of the original question) and take the result which is not null.
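The same idea can be written a bit more compactly with coalesce(), which returns the first non-null value (field name and formats taken from the example above):

| eval my_time=coalesce(strptime(genZeit, "%Y-%m-%dT%H:%M:%S%:z"), strptime(genZeit, "%Y-%m-%dT%H:%M:%S.%3N%:z"))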
When I edit a correlation search, I want to configure the time range of the drill-down search. If I put "1h" into the "Earliest Offset" field, it inserts the Unix timestamp in milliseconds, but Splunk expects the Unix timestamp in seconds. Is there a workaround for this issue? Correct would be:
I did it on each search head separately and it worked! Too bad there's no way to do it from the master and deploy it to them, but at least it works. Thanks for the help!
Hello @bishida,

I have gone through the repo for the statsdreceiver but I was not able to configure it successfully.

receivers:
  statsd:
  statsd/2:
    endpoint: "localhost:8127"
    aggregation_interval: 70s
    enable_metric_type: true
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "gauge"
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 100
      - statsd_type: "distribution"
        observer_type: "summary"
        summary:
          percentiles: [0, 10, 50, 90, 95, 100]

I tried the configuration above but it was not working; I am not sure how Splunk Observability Cloud will know to listen on port 8127.

Let me explain my use case in detail. I have a couple of EC2 Linux instances on which a statsd server is running, generating custom gRPC metrics from a golang application on port UDP:8125 (statsd). Now I want to send these custom gRPC metrics to Splunk Observability Cloud so that I can monitor them there. For this we need to make a connection between the EC2 Linux instances and Splunk Observability Cloud so that it is able to receive these custom gRPC metrics. Since we don't have any hostname/IP address for Splunk Observability Cloud, we have to use some agent for this; I think we can use the "splunk-otel-collector.service".

Currently I am able to capture the predefined metrics such as "^aws.ec2.cpu.utilization", system.filesystem.usage, etc. in my Splunk Observability Cloud, but now I also want the custom gRPC metrics in the same way.

Before this setup I had multiple EC2 Linux instances on which a statsd server was running, plus a separate Splunk Enterprise EC2 instance that collected all the metrics. Splunk Enterprise provides commands to connect instances to it ("./splunk enable listen 9997" and "./splunk add forward-server <destination_hostname>:9997"), and I was using the configuration below to do so:

"statsd": {
  "statsd_max_packetsize": 1400,
  "statsd_server" : "destination_hostname",
  "statsd_port" : "8125"
},

I want to achieve the same thing with Splunk Observability Cloud. Can you please explain in detail how we can connect EC2 instances with Splunk Observability Cloud to send custom gRPC metrics from a golang application running on port UDP:8125 (statsd)? If using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver is the only way, then what changes do I need to make in the configuration files for the custom metric collection (and where they have to be added in that directory), and which hostnames and port names need to be set in which files?

Thanks
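Not an authoritative answer, but a minimal sketch of how this is usually wired up with the splunk-otel-collector, assuming the default agent config at /etc/otel/collector/agent_config.yaml and that your access token and realm are already provided via the usual SPLUNK_ACCESS_TOKEN / SPLUNK_REALM environment variables. The key point is that the collector running on the EC2 instance (not Observability Cloud) is what listens on the UDP port; the app keeps sending statsd to localhost:

# /etc/otel/collector/agent_config.yaml (excerpt, sketch only)
receivers:
  statsd:
    endpoint: "0.0.0.0:8125"        # collector listens here, next to your golang app
    aggregation_interval: 60s
    enable_metric_type: true

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"

service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [signalfx]

After editing, restart the service (systemctl restart splunk-otel-collector) and point the golang app's statsd client at localhost:8125; the collector then forwards the metrics to Observability Cloud over the signalfx exporter.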
Hello,

I have a Splunk Connect for Syslog (SC4S) server that retrieves logs from a source and transmits them to Splunk indexers. In order to reduce the number of events, I want to filter the logs at the SC4S level. Note that SC4S uses syslog-ng for filtering and parsing.

The use case is as follows: when an event arrives on the SC4S server and contains the IP address 10.9.40.245, the event should be dropped.

Does anyone have any idea how to create this filter on SC4S?

Thank you.
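Not a definitive answer, but SC4S supports local syslog-ng "postfilters" for this kind of event dropping. A rough sketch follows; the file name, parser name, and the sc4s-postfilter application class are assumptions based on the SC4S local-parser convention, so please check the SC4S documentation for your version. The file would go under /opt/sc4s/local/config/app_parsers/:

# app-postfilter-drop_ip.conf (sketch)
block parser app-postfilter-drop_ip() {
    channel {
        # keep only events that do NOT contain this IP address
        filter { not message('10\.9\.40\.245'); };
    };
};

application app-postfilter-drop_ip[sc4s-postfilter] {
    parser { app-postfilter-drop_ip(); };
};

Restart the SC4S container/service after adding the file so syslog-ng reloads the configuration.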
Well, unfortunately, as I stated above, "neither fixes the issue." It doesn't matter how I configure the UF props.conf or the SH props.conf, Splunk refuses to parse the JSON properly, even though other JSON data feeds work just fine. Guess I'll have to open another ticket with Splunk.
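For anyone comparing notes, a minimal sketch of the two usual variants (the sourcetype name and timestamp field are placeholders). The common gotcha is to use one of them, not both, since INDEXED_EXTRACTIONS on the UF combined with KV_MODE=json on the SH produces duplicate field extractions:

# Variant 1: index-time extraction - props.conf on the UF (structured data is parsed on the UF)
[my:json]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIMESTAMP_FIELDS = timestamp

# Variant 2: search-time extraction - props.conf on the SH only
# [my:json]
# KV_MODE = json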
Changed everything as mentioned before, but no data collected into O11y Cloud.
Hi @dorHerbesman ,

beware: it's web-features.conf, not web.features.conf, but maybe that's just a typo.

Anyway, what do you mean by Splunk master instance? You have to do this on the Search Heads, not on other instances.

Ciao. Giuseppe