All Posts



How can we concatenate values from one field and put them in a new field, separated by commas? For example, when I run a search I get a number of hosts in the host field. I want to concatenate them all into one field separated by commas.
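One common sketch for this in SPL gathers the values into a multivalue field with stats values (which also deduplicates them) and then joins them with mvjoin; the output field name hosts is just an example:

```spl
| stats values(host) as host
| eval hosts=mvjoin(host, ",")
```

If you need every occurrence rather than the distinct values, stats list(host) can be used in place of values(host).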
in props.conf

[tenable:sc:vuln]
TRANSFORMS-Removetenable_remove_logs = tenable_remove_logs

transforms.conf

[tenable_remove_logs]
SOURCE_KEY = _raw
REGEX = ABCSCAN
DEST_KEY = queue
FORMAT = nullQueue

Do you have any other TRANSFORMS-<class> or REPORTS-<class> statements in this props? The order of processing could be creating issues. I'm throwing Hail Marys since I'm at a loss.
Hi, our Linux machine has reached End of Support, so we are moving the Cluster Master from one machine to another. I set up the cluster master on the new hardware and it was working well, but when I changed the master node URL on the indexer, it stopped working. The indexer doesn't start by itself, and even when I start it manually it only stays running for some time, during which the web UI of the indexer does not work. After a while the indexer stops automatically. The same happened with another indexer as well. When I revert to the old cluster master, all the issues are resolved automatically: the indexer keeps running, the web UI is available, and no issues are noticed. Any idea why the indexer keeps shutting down? I am on Splunk version 9.0.4. Regards, Pravin
Thank you for the suggestion.  I could not test it, as an alternative approach has been adopted in the meantime.
By accident I ran into the same problem. The brute force solution looks like:

| eval my_time1=strptime(genZeit, "%Y-%m-%dT%H:%M:%S%:z")
| eval my_time2=strptime(genZeit, "%Y-%m-%dT%H:%M:%S.%3N%:z")
| eval my_time=if(isnotnull(my_time1),my_time1,my_time2)

Try to convert the time with both of the possible time formats (be careful: my example will not reflect the time format of the original question), and take the result that is not null.
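The same two-format fallback can be written more compactly with SPL's coalesce function, which returns its first non-null argument (same field names as in the post above):

```spl
| eval my_time=coalesce(strptime(genZeit, "%Y-%m-%dT%H:%M:%S%:z"), strptime(genZeit, "%Y-%m-%dT%H:%M:%S.%3N%:z"))
```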
When I edit a correlation search, I want to configure the time range of the drill-down search. If I put "1h" in the "Earliest Offset" field, it inputs the Unix timestamp in milliseconds, but Splunk expects the Unix timestamp in seconds. Is there a workaround for this issue?
I did it on each search head separately and it worked! Too bad there's no way to do it from the master and deploy it to them, but at least it works. Thanks for the help!
Hello @bishida, I have gone through the repo for the statsdreceiver but I was not able to configure it successfully.

receivers:
  statsd:
  statsd/2:
    endpoint: "localhost:8127"
    aggregation_interval: 70s
    enable_metric_type: true
    is_monotonic_counter: false
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "gauge"
      - statsd_type: "timing"
        observer_type: "histogram"
        histogram:
          max_size: 100
      - statsd_type: "distribution"
        observer_type: "summary"
        summary:
          percentiles: [0, 10, 50, 90, 95, 100]

I tried the configuration above but it was not working, and I am not sure how Splunk Observability Cloud will know to listen on port 8127. Let me explain my use case in detail: I have a couple of EC2 Linux instances on which a statsd server is running, generating custom gRPC metrics from a Golang application on port UDP:8125 (statsd). Now I want to send these custom gRPC metrics to Splunk Observability Cloud so that I can monitor them there. For this we need to make a connection between the EC2 Linux instances and Splunk Observability Cloud, so that it can receive these custom gRPC metrics. As we don't have any hostname/IP address for Splunk Observability Cloud, we have to use some agent for this; I think we can use "splunk-otel-collector.service". Currently I am able to capture predefined metrics such as "^aws.ec2.cpu.utilization", system.filesystem.usage etc. in my Splunk Observability Cloud, but now I also want the custom gRPC metrics in the same way. Before this, I had a setup with multiple EC2 Linux instances on which a statsd server was running, and a separate Splunk Enterprise EC2 instance that collected all the metrics.
But Splunk Enterprise provides commands to connect instances to Splunk Enterprise, using "./splunk enable listen 9997" and "./splunk add forward-server <destination_hostname>:9997", and I was using the configuration below to do so:

"statsd": {
  "statsd_max_packetsize": 1400,
  "statsd_server": "destination_hostname",
  "statsd_port": "8125"
},

I want to achieve the same thing with Splunk Observability Cloud. Can you please explain in detail how we can connect EC2 instances to Splunk Observability Cloud to send custom gRPC metrics from a Golang application running on port UDP:8125 (statsd)? If using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver is the only way, then what changes do I need to make in the configuration files related to the custom metric collection (and where do they have to be added in this directory), which hostnames and port names need mentioning in which files, etc.? Thanks
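One thing worth noting about the statsd receiver config: in an OpenTelemetry collector, a receiver only does something once it is referenced in a pipeline under service:, and it is the collector on the EC2 instance (not Splunk Observability Cloud) that listens on the statsd port. A minimal sketch of how the pieces could fit together, assuming the splunk-otel-collector's signalfx exporter and its usual token/realm environment variables (check your actual agent_config.yaml, which already defines an exporters section):

```yaml
receivers:
  statsd:
    endpoint: "0.0.0.0:8125"   # the collector listens here; the Golang app keeps sending to UDP 8125

exporters:
  signalfx:                    # assumed exporter name; shipped with splunk-otel-collector
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    realm: "${SPLUNK_REALM}"

service:
  pipelines:
    metrics/statsd:
      receivers: [statsd]
      exporters: [signalfx]
```

With this shape, the collector receives the statsd packets locally and forwards the resulting metrics to Splunk Observability Cloud, so no inbound hostname/IP for the cloud side is needed.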
Hello, I have a Splunk Connect for Syslog (SC4S) server that retrieves logs from a source and transmits them to Splunk indexers. In order to reduce the number of events, I want to filter the logs at the SC4S level. Note that SC4S uses syslog-ng for filtering and parsing. The use case is as follows: when an event arrives on the SC4S server and contains the IP address 10.9.40.245, the event should be dropped. Does anyone have any idea how to create this filter on SC4S? Thank you.
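Since SC4S is built on syslog-ng, the drop condition itself can be sketched in plain syslog-ng terms; the filter name below is hypothetical, and how such a snippet is wired into SC4S's local filter/postfilter mechanism depends on the SC4S version, so treat this only as the shape of the match:

```conf
# hypothetical syslog-ng filter: match any event whose message body
# contains the IP 10.9.40.245 (dots escaped, PCRE syntax)
filter f_drop_bad_ip {
    match("10\.9\.40\.245" value("MESSAGE") type("pcre"));
};
```

A log path would then route events matching this filter away from the normal destination (i.e., drop them) instead of forwarding them to the indexers.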
Well, unfortunately, as I stated above "neither fixes the issue." It doesn't matter how I configure the UF props.conf or the SH props.conf, Splunk is refusing to parse JSON properly. Even though other JSON datafeeds work just fine. Guess I'll have to open another ticket with Splunk.
Changed everything as mentioned before, but no data collected into O11y Cloud.
Hi @dorHerbesman, beware: it's web-features.conf, not web.features.conf, but maybe that's just a mistyping. Anyway, what do you mean by "Splunk master instance"? You have to do this on the Search Heads, not on other instances. Ciao. Giuseppe
Hi @ameet, the 32-bit UF should run on a 64-bit OS, but it's always better to have the correct version! You can install over the old installation, even if it's better to remove it first. To avoid this kind of issue, it's always better to plan the installation, creating (e.g. in Excel) a list of destination systems with the OS and architecture of each one, so you don't install the wrong version. Ciao. Giuseppe
Hello, I am going through the steps of updating Splunk SOAR Unpriv from the site documentation, but when I copy the new package to the Splunk-soar folder and try to start the phantom service, I encounter the error: Phantom Startup failed: postgresql-11
I have tried pasting your code into /opt/splunk/etc/system/local/web.features.conf on my Splunk master instance and restarting it (including a rolling restart), with no luck. Maybe I should put it somewhere in my shcluster/apps? Any other suggestions?
Hi!! I'm very new to Splunk and just want some advice. I accidentally installed a 32-bit version of the universal forwarder on my test Linux machine. Is it fine to install the 64-bit version on top without removing the 32-bit version, and will this cause issues later? I'm also running Splunk Web on the same Linux machine. Any advice or suggestions please. Amit
I don't think this is related to my case. If the change is replicated between 2 of the SHs, why is it not replicated on the remaining SH? All 3 SHs are sending logs to the indexers.
Hello, We have a Splunk indexer cluster with two search heads and would like to use this add-on in the cluster: https://splunkbase.splunk.com/app/4055 We installed the add-on on the search head without ES, and on all indexers via the Cluster Manager app. Then we set up all the inputs for the add-on on the search head, but could not select the index “M365”, only enter it manually. The problem now is that this index is not filled by the indexers! What are we doing wrong here?