All Posts

I have an application on Splunkbase and want to rename it along with its commands and custom action. I have updated the app name by renaming the folder and updating the app ID, and I've also updated the commands and custom action with the new name. While testing on my local Splunk instance, I observed that the existing application isn't replaced by the new one because the folder name and app name/ID differ from the older version. I believe that is fine, as I can ask users to remove it from their instances, but I want the saved searches and local data of the older app to be available in the renamed (newer) app, and I'm unable to find an appropriate way of doing so. There was a post in the community where the solution was to clone the local data from the older app to the newer app, but that isn't feasible for me because I don't have access to the instances on which users have the older app installed. Can someone please help me with this?

I also had a few other questions related to older applications:
- What is the procedure for deleting an existing application on Splunkbase? Is emailing Splunk support the only way? I tried app archiving, but it doesn't restrict users from installing it.
- Is there a way to transfer an old Splunk application or account to a new account? Is there any alternative to emailing the Splunk support team?

TL;DR: How can I replace the already installed application on the user's end with the newly renamed application in Splunk? Since the application names differ, Splunk installs a separate app under the new name instead of updating the existing one. If users are already using the existing application and have saved configurations and searches in it, how can those be migrated to the newly renamed application?
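For reference, a minimal sketch of what the manual clone mentioned above could look like on an instance where filesystem access is available; old_app and new_app are placeholder folder names and the paths are assumptions:

    # copy user-created knowledge objects (saved searches, local settings) into the renamed app
    cp -r $SPLUNK_HOME/etc/apps/old_app/local/. $SPLUNK_HOME/etc/apps/new_app/local/
    # carry over sharing/permissions metadata if wanted
    cp $SPLUNK_HOME/etc/apps/old_app/metadata/local.meta $SPLUNK_HOME/etc/apps/new_app/metadata/
    # copy any lookup files the old app accumulated
    cp -r $SPLUNK_HOME/etc/apps/old_app/lookups/. $SPLUNK_HOME/etc/apps/new_app/lookups/
    # restart so the copied configuration is loaded
    $SPLUNK_HOME/bin/splunk restart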
Hi @Cramery_ , could you share a sample of your complete logs (anonymized if needed)? Anyway, when there's a backslash it's always a problem, because you need more backslashes in Splunk than you use on regex101.com. Do you need to use the regex in a search or in conf files? If in conf files, use the same number of backslashes that you use on regex101; if in a search, add one more backslash. Ciao. Giuseppe
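A hedged illustration of that counting rule; the sourcetype, the field name parent, and the extraction name are placeholders, and the in-search doubling is exactly the point worth re-testing on real data.

In a conf file (e.g. local/props.conf) the regex is written the same way as on regex101, so one literal backslash is matched with \\:

    [your:sourcetype]
    EXTRACT-winpath = (?<winpath>C:\\Windows\\System32)

In an inline search, the quoted string adds one more level of escaping, so the same literal backslash roughly becomes \\\\:

    | rex field=parent "(?<winpath>C:\\\\Windows\\\\System32)"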
1. I have the time attribute added as required.
2. I have set the Summarization Period to run once every 5 minutes (*/5 * * * *), and the old summaries clean-up is the default 30 minutes.
3. Added a summary range earliest time of 91 days.
4. Adding summariesonly=true doesn't give any results, for a 1 hour range as well.
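For reference, a hedged sketch of how those UI settings roughly correspond to a datamodels.conf acceleration stanza (the data model name is a placeholder; UI edits normally land in local/datamodels.conf):

    [My_Data_Model]
    acceleration = 1
    # summarization period: run the acceleration search every 5 minutes
    acceleration.cron_schedule = */5 * * * *
    # summary range: keep summaries for roughly the last 91 days
    acceleration.earliest_time = -91d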
Hi, I ran into a very odd and specific issue. I try to regex-filter a field, let's call it "parent". The field has the following structure (not the actual field I want to regex, but it's easier to show the issue this way, so other options like "use .*" or similar won't work):

C:\\Windows\\System32\\test\\

I try to regex this field like:

"C:\\\\Windows\\\\System32\\\\test\\\\"

This does not work. But as soon as I delete the second folder:

"C:\\\\Windows\\\\.*\\\\test\\\\"

it works. And this happens across all fields: no matter which field with a path I take, as soon as I enter the second folder it immediately stops working. I also tried adding different special characters, all numbers and letters, space, tab, etc., and also tried changing the "\\\\" and adding ".*System32.*", but nothing works. Has anyone else run into this issue and found a solution?
Hello @bishida, thanks for the reply. I have gone through the repo for the statsdreceiver, but I was not able to configure it successfully:

    receivers:
      statsd:
      statsd/2:
        endpoint: "localhost:8127"
        aggregation_interval: 70s
        enable_metric_type: true
        is_monotonic_counter: false
        timer_histogram_mapping:
          - statsd_type: "histogram"
            observer_type: "gauge"
          - statsd_type: "timing"
            observer_type: "histogram"
            histogram:
              max_size: 100
          - statsd_type: "distribution"
            observer_type: "summary"
            summary:
              percentiles: [0, 10, 50, 90, 95, 100]

I tried the configuration above, but it was not working, and I am not sure how Splunk Observability Cloud will know to listen on port 8127.

Let me explain my use case in detail. I have a couple of EC2 Linux instances on which a statsd server is running; it generates custom gRPC metrics from a Golang application on port UDP:8125 (statsd). Now I want these custom gRPC metrics sent to Splunk Observability Cloud so that I can monitor them there. For this we need a connection between the EC2 Linux instances and Splunk Observability Cloud so that it can receive these custom gRPC metrics. Since we don't have any hostname/IP address for Splunk Observability Cloud, we have to use some agent for this; I think we can use "splunk-otel-collector.service". Currently I am able to capture the predefined metrics such as "^aws.ec2.cpu.utilization", system.filesystem.usage, etc. in Splunk Observability Cloud, but now I also want the custom gRPC metrics in the same way.

Before this setup, I had multiple EC2 Linux instances on which a statsd server was running, plus a separate Splunk Enterprise EC2 instance that collected all the metrics. Splunk Enterprise provides commands to connect instances ("./splunk enable listen 9997" and "./splunk add <destination_hostname>:9997"), and I was using the configuration below to do so:

    "statsd": {
      "statsd_max_packetsize": 1400,
      "statsd_server": "destination_hostname",
      "statsd_port": "8125"
    },

I want to achieve the same thing with Splunk Observability Cloud. Can you please explain in detail how we can connect EC2 instances to Splunk Observability Cloud to send custom gRPC metrics from a Golang application running on port UDP:8125 (statsd)? If using https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver is the only way, then what changes do I need to make in the configuration files (where the custom metric collection is added, which hostnames and ports go in which files in that directory, etc.)? Thanks
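A hedged sketch of how a statsd receiver is typically wired into the collector's metrics pipeline in /etc/otel/collector/agent_config.yaml; only the statsd-related pieces are shown, the other receivers/processors/exporters in the default file are assumed to stay as they are, and the Observability Cloud backend is reached through the signalfx exporter already configured there rather than through a hostname you manage:

    receivers:
      statsd:
        # listen for the app's statsd traffic; if a separate statsd daemon already owns 8125,
        # pick a free port here and point the application at it instead
        endpoint: "0.0.0.0:8125"
        aggregation_interval: 60s
        enable_metric_type: true

    service:
      pipelines:
        metrics:
          # keep whatever receivers/processors/exporters the default file already lists
          # and just append statsd to the receivers list, e.g.:
          receivers: [hostmetrics, otlp, signalfx, statsd]
          processors: [memory_limiter, batch, resourcedetection]
          exporters: [signalfx]

After restarting the service (sudo systemctl restart splunk-otel-collector), the collector binds the UDP port and the signalfx exporter forwards the aggregated statsd metrics to Splunk Observability Cloud using the access token and realm already set for the agent.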
The env variable OTEL_EXPORTER_OTLP_TRACES_HEADERS has not been created. Regarding the OTEL_OTLP_EXPORTER_ENDPOINT, it was set to http://localhost:4317; I have now changed it to 4318. Regarding SPLUNK_ACCESS_TOKEN, I had not changed this value to the new one; it is changed now. I will restart the application, generate traffic again, and let you know.
I tried this as well and also increased depth_limit in limits.conf on the HF, under the Tenable add-on's local directory, but it is still not working:

    [rex]
    depth_limit = 10000

The character count is 9450 characters total in an event. Still not working.
Hi @vn_g , the first question is: what's the update frequency of your Data Model? If it's more than 30 minutes, you cannot run a search on the Data Model over the last 30 minutes. Anyway, to search only the Data Model summaries, without also searching the index events, you have to add summariesonly=true to your searches. Ciao. Giuseppe
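As a hedged example of where that option goes in a tstats search (the data model and field names below are placeholders):

    | tstats summariesonly=true count from datamodel=Network_Traffic where nodename=All_Traffic by All_Traffic.src, All_Traffic.dest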
Thanks for the suggestion @bishida 
I made the changes, but it is not working. The data is still being indexed.
How do I pass earliest and latest values to a data model search? For example, if I select a time range of last 30 minutes in the time range picker but still give earliest and latest for the last 24 hours in a normal search, the earliest and latest parameters take precedence and work in the normal search. How do I implement the same with a data model query?
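One hedged approach with tstats: time bounds can be given as earliest/latest terms in the where clause, which then take precedence over the time range picker for that search (the data model and field names are placeholders):

    | tstats summariesonly=true count from datamodel=Network_Traffic where nodename=All_Traffic earliest=-24h latest=now by All_Traffic.src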
Is it impossible to apply SSL to HEC in the Splunk trial version?  
Thanks. I'll take a look. I think I tried modifying that search macro once already. I'll try it again. 
I noticed one other thing that we should try. Since you're running a local instance of the OTel collector, can you unset the env variable OTEL_EXPORTER_OTLP_TRACES_HEADERS? We only want to send the token that way when you're not using a local collector. You already changed your OTEL_OTLP_EXPORTER_ENDPOINT back to http://localhost:4318, correct? Do you have your SPLUNK_ACCESS_TOKEN value set in /etc/otel/collector/splunk-otel-collector.conf? That token should have INGEST and API capabilities.  Also check that the correct secret is used in that token since you said you rotated the last one.
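A hedged sketch of the checks described above, assuming a systemd install of the collector (the variable names follow the thread; the standard OTel SDK endpoint variable is OTEL_EXPORTER_OTLP_ENDPOINT):

    # in the application's environment: drop the header-based token and point at the local collector
    unset OTEL_EXPORTER_OTLP_TRACES_HEADERS
    export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"

    # in /etc/otel/collector/splunk-otel-collector.conf: the token the collector itself uses
    SPLUNK_ACCESS_TOKEN=<token with INGEST and API capabilities>

    # restart the collector so the new token is picked up
    sudo systemctl restart splunk-otel-collector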
You may see that the UI is a front end for setting the eventtype and macro configurations called sentinelone_base_index, which have the definition

    index IN (xx)

so you can edit these and add in your indexes:

    index IN (xx,yy)

There is also a configuration file, sentinelone_settings.conf, which has a base_index = XX setting, so not the same as the others, but I am not sure where this is used, if anywhere. I can't see any obvious usage of the macro, but you could try updating the macro and eventtype to see if that works.
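A hedged sketch of what that edit could look like in the add-on's local config rather than through the UI (the stanza names are taken from the post above; the exact file layout inside the add-on may differ):

    # local/macros.conf
    [sentinelone_base_index]
    definition = index IN (xx,yy)

    # local/eventtypes.conf
    [sentinelone_base_index]
    search = index IN (xx,yy)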
So, you want to find any event that has the word error in the _raw event and then somehow create some kind of grouping of those events. Your requirement is way too vague for anyone to tell you how to group your messages without some knowledge of your data, other than the basic

    index=* error
    | stats count by _raw

which is probably next to useless, as you will get a count of 1 for all errors. You could try using the cluster command, e.g.

    index=Your_Indexes error
    | cluster showcount=t
    | table cluster_count _raw
    | sort -cluster_count

which will attempt to cluster your data - see here for the command description: https://docs.splunk.com/Documentation/Splunk/9.3.2/SearchReference/Cluster
@Ste The solution is to use addinfo. If you make the search based on the time picker, then use addinfo in the subsearch; it will generate info_max_time, which is the normalised end epoch time for the time picker, and you can then use that in your subsearch instead, i.e.

    index="_audit" [
      | makeresults
      | addinfo
      | eval earliest=relative_time(info_max_time,"-1d@d")
      | eval latest=relative_time(info_max_time,"@d")
      | fields earliest latest
      | format]
    | table _time user
It seems you do actually have correlation, which is the 3rd and 4th path elements of the source, so you can merge the event data on variableA and variableB using eventstats like this:

    ``` Having extracted variableC from _raw, this just clears variableC from all events that are not the primary match, i.e. file.txt ```
    | eval variableC=if(match(source, "\/file2.txt$"), variableC, null())
    ``` Need to get rid of the second data set events ```
    | eval keep=if(isnull(variableC), 1, 0)
    ``` Now collect all values (1) of variableC by the matching path elements ```
    | eventstats values(variableC) as variableC by variableA, variableB
    ``` Now just hang on to the first dataset ```
    | where keep=1

Here's a simulated working example:

    | makeresults count=10
    ``` Create two types of path d0 and d1 /d3 ```
    | eval source="/dir1/dir2/d".(random() % 2)."/d3/file.txt"
    ``` So we get an incorrect variableC extraction we don't want ```
    | eval _raw="main_event_has_raw_match/"
    ``` Now add in a match for the two types above ```
    | append [
      | makeresults count=2
      | streamstats c
      | eval source="/dir1/dir2/d".(if(c=1, "0", "1"))."/d3/file2.txt"
      | eval _raw="bla".c."/"
      | fields - c ]
    | rex field=source "\/dir1\/dir2\/(?<variableA>.+?(?=\/))\/(?<variableB>.+?(?=\/))\/.*"
    | rex field=_raw "(?<variableC>.+?(?=\/))*"
    | eval variableC=if(match(source, "\/file2.txt$"), variableC, null())
    | eval keep=if(isnull(variableC), 1, 0)
    | eventstats values(variableC) as variableC by variableA, variableB
    | where keep=1
    | table variable*
    | sort variableA
Are you absolutely sure that your forwarded events are all raw_event and not rendered_event? I had this issue where my event collector was forwarding mixed logs. You must check the event collector and make sure all forwarded events are of the same format.
While the one-liner is relatively OK (though the nitpicker in me could point out some bad practices ;-)), it will replace all occurrences of a _string_ even if it's used in a completely different context, not just as an index name. @deepthi5 The usual disclaimer - automatically finding such things will not cover all possible usages. An index can be specified directly in a search, within a macro, within an eventtype, or even dynamically via a subsearch.
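A hedged starting point for spotting some of those references (it still won't catch index names built dynamically by a subsearch, and "old_index" is a placeholder):

    ``` macros whose definition mentions the index; conf-eventtypes and conf-savedsearches can be checked the same way ```
    | rest /servicesNS/-/-/configs/conf-macros
    | search definition="*old_index*"
    | table title eai:acl.app definition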