All Posts

Thanks. I'll take a look. I think I tried modifying that search macro once already. I'll try it again. 
I noticed one other thing that we should try. Since you're running a local instance of the OTel collector, can you unset the env variable OTEL_EXPORTER_OTLP_TRACES_HEADERS? We only want to send the token that way when you're not using a local collector. You already changed your OTEL_OTLP_EXPORTER_ENDPOINT back to http://localhost:4318, correct? Do you have your SPLUNK_ACCESS_TOKEN value set in /etc/otel/collector/splunk-otel-collector.conf? That token should have INGEST and API capabilities. Also check that the token uses the correct secret, since you said you rotated the last one.
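For reference, a minimal sketch of the two places involved - the shell where the app runs and the collector's conf file. The token and realm values are placeholders, and the conf path follows the default install mentioned above.

# Shell where the instrumented app runs (sketch)
unset OTEL_EXPORTER_OTLP_TRACES_HEADERS    # only needed when you're not sending directly to the backend

# /etc/otel/collector/splunk-otel-collector.conf (excerpt, placeholder values)
SPLUNK_ACCESS_TOKEN=<token with INGEST and API capabilities>
SPLUNK_REALM=<your realm>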
You may see that the UI is a front end for setting the eventtype and macro configurations called sentinelone_base_index, which have the definition index IN (xx), so you can edit these and add in your indexes: index IN (xx,yy). There is also a configuration file, sentinelone_settings.conf, which has a base_index = XX setting (not the same as the others), but I am not sure where this is used, if anywhere. I can't see any obvious usage of the macro, but you could try updating the macro and eventtype to see if that works.
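If it helps, the edited objects would look roughly like this in macros.conf and eventtypes.conf - a sketch only, since the exact stanza names and default definitions come from the app itself.

# macros.conf (sketch)
[sentinelone_base_index]
definition = index IN (xx,yy)

# eventtypes.conf (sketch)
[sentinelone_base_index]
search = index IN (xx,yy)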
So, you want to find any event that has the word error in the _raw event and then somehow create some kind of grouping of those events. Your requirement is way too vague to do grouping on, because there is no way for any of us to tell you how to group your messages without some knowledge of your data. Other than the basic

index=* error | stats count by _raw

which is probably next to useless, as you will get 1 for all errors, you could try using the cluster command, e.g.

index=Your_Indexes error
| cluster showcount=t
| table cluster_count _raw
| sort -cluster_count

which will attempt to cluster your data - see here for the command description: https://docs.splunk.com/Documentation/Splunk/9.3.2/SearchReference/Cluster
@Ste The solution is to use addinfo. If you make the search based on the time picker and use addinfo in the subsearch, it will generate info_max_time, which is the normalised end epoch time for the time picker, and you can then use that in your subsearch instead, i.e.

index="_audit"
    [| makeresults
     | addinfo
     | eval earliest=relative_time(info_max_time,"-1d@d")
     | eval latest=relative_time(info_max_time,"@d")
     | fields earliest latest
     | format]
| table _time user
It seems you do actually have correlation, which is the 3rd and 4th path elements of the source, so you can merge the event data on variableA and variableB using eventstats like this

``` Having extracted variableC from _raw, this just clears variableC from all events that are not the primary match, i.e. file.txt ```
| eval variableC=if(match(source, "\/file2.txt$"), variableC, null())
``` Need to get rid of the second data set events ```
| eval keep=if(isnull(variableC), 1, 0)
``` Now collect all values (1) of variableC by the matching path elements ```
| eventstats values(variableC) as variableC by variableA, variableB
``` Now just hang on to first dataset ```
| where keep=1

Here's a simulated working example

| makeresults count=10
``` Create two types of path d0 and d1 /d3 ```
| eval source="/dir1/dir2/d".(random() % 2)."/d3/file.txt"
``` So we get an incorrect variableC extraction we don't want ```
| eval _raw="main_event_has_raw_match/"
``` Now add in a match for the two types above ```
| append [
  | makeresults count=2
  | streamstats c
  | eval source="/dir1/dir2/d".(if(c=1, "0", "1"))."/d3/file2.txt"
  | eval _raw="bla".c."/"
  | fields - c ]
| rex field=source "\/dir1\/dir2\/(?<variableA>.+?(?=\/))\/(?<variableB>.+?(?=\/))\/.*"
| rex field=_raw "(?<variableC>.+?(?=\/))*"
| eval variableC=if(match(source, "\/file2.txt$"), variableC, null())
| eval keep=if(isnull(variableC), 1, 0)
| eventstats values(variableC) as variableC by variableA, variableB
| where keep=1
| table variable*
| sort variableA
Are you absolutely sure that your forwarded events are all raw_event and not rendered_event? I had this issue where my event collector was forwarding mixed logs. You must check the event collector and make sure all forwarded events are of the same format.
While the one-liner is relatively OK (though the nitpicker in me could point out some bad practices ;-)), it will replace all occurrences of a _string_ even if it's used in a completely different context, not just as an index name. @deepthi5 The usual disclaimer - automatically finding such things will not cover all possible usages. An index can be specified directly in a search, within a macro, in an eventtype, or even dynamically using a subsearch.
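To illustrate the point, the same index name can hide in several different places - all of the names below are made up.

# Directly in a search
index=old_index sourcetype=web

# Inside a macro (macros.conf)
[web_base]
definition = index=old_index

# Inside an eventtype (eventtypes.conf)
[web_errors]
search = index=old_index log_level=ERROR

# Dynamically, via a subsearch against a (hypothetical) lookup
index=* [| inputlookup index_map.csv | fields index]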
As usual, there may be more than one solution to a problem (in your case - ingestion of Azure Firewall logs). True, Event Hub will give you near-realtime delivery (it's not strictly realtime since it's pull-based, as far as I remember), but the storage-based method might be cheaper, and if you're OK with the latency it might be sufficient. Your original problems were most probably caused by a misconfigured sourcetype. The input data was not broken into events properly and/or the events were too long and got truncated. As a result, JSON extractions didn't happen because the events were not well-formed JSON.
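For what it's worth, the sourcetype settings that usually fix this for single-line JSON look something like the sketch below; the sourcetype name and values are illustrative and need tuning to your actual data.

# props.conf (sketch)
[azure:firewall:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 100000
KV_MODE = json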
I want to group unique error strings coming from different logs. The events are from different applications with different logging formats. I am creating a report that shows the count of events for each unique error string. The boundary condition for determining which strings to match: all events that have the "error" keyword in the log statement.
Thanks for looking into it. I think your solution would work if there were a specific set of errors, but in my case there is no specific list of errors. The errors come from different logs with different logging formats.
Not by lowering the fonts, but by using special unicode characters that look like ampersands but are not treated as ampersands. Try copying and pasting the ampersand-like characters from my post.
I'm not sure if that URL is correctly written in your post, but the 8000 is a port, not a part of the path. E.g. "https://my_sh:8000/saml/logout". If that is not the issue, could you try searching your internal logs for keywords like "Saml" or "samlresponse"? Perhaps there will be a more detailed error message.

index=_internal SamlResponse
In the Metric Finder we can't see anything about the service.name. As far as I know my PHP is using zero-code instrumentation. I have generated a lot of traffic and nothing. I performed a purchase, added items to a cart, etc. None of these spans are being reflected in O11y Cloud. I don't know about Splunk HEC; I'm just starting to use Splunk O11y. I need to study more about the HTTP Event Collector.
I have configured Splunk with SAML (ADFS), but we are facing an issue during logout, with the following error message: "Failed to validate SAML logout response received from IdP". I have set the below URL as the logout URL in the SAML configuration: "https://my_sh:8000/saml/logout". How can I overcome this issue?
Splunk Support Update: Regarding your question about the best way to ingest Azure Firewall logs into Splunk, I would recommend using Event Hub for this purpose. Event Hub allows you to stream real-time data, which is ideal for continuous log ingestion. On the other hand, using Storage Blob as an input can lead to delays, especially as log sizes increase, and could also result in data duplication.
In a past environment I ran isolated HFs specifically for HEC with no other purpose. I was able to tune them up for HEC processing because there were no conflicting use cases for the compute power. I had 2 sites with 2 HFs per site, all acting in full HA behind site LB and local LB configurations. The HFs were pointed to the Deployment Servers so the HEC inputs and config could be updated in a central location with auto distribution.

To size up, I would only have to stand up another HF in either location, one at a time or in bulk, have them point to the DS, and confirm the local logs were found at the indexers. Then I could add the new server addresses to the server class for the HEC inputs/config to upload, and update the LB to include the new server(s) in the pool for that particular site.

Very convenient and not difficult once set up - although not for novice users. Plugging an HF into a DS can come with issues if you're not 100% aware of how apps are named.
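As a rough illustration of the kind of centrally deployed config this describes - the token, port, index, and stanza names are placeholders, not the actual setup:

# inputs.conf, pushed from the DS to each HEC-only HF (sketch)
[http]
disabled = 0
port = 8088
enableSSL = 1

[http://app_logs]
disabled = 0
token = <generated-token-guid>
index = app_events
sourcetype = app:json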
The messages in your /var/log/syslog appear to be a separate issue. If you want to collect logs, you can either configure your "splunk_hec" exporter to use your own Splunk HEC endpoint and token, or you can disable logs for now by removing "splunk_hec" from your logs pipeline (service->pipelines->logs->exporters [remove splunk_hec]).

I'm wondering if you are getting APM data into O11y Cloud but perhaps aren't generating traffic that creates spans? Can you be sure to generate some test traffic in your app that will definitely create a span? Something that calls another app or API would be ideal. You can also look for clues in the Metric Finder. Search there for the service.name you defined in your instrumentation and see if you're getting any metrics for that service.name.
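If you go the first route, the relevant exporter section would look roughly like this - a sketch only, with a placeholder endpoint, token, and index; the keys follow the splunk_hec exporter's documented options.

# collector config excerpt (sketch)
exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "https://your-splunk-server:8088/services/collector"
    index: "main"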
Did you use the Windows uninstaller to remove the programs? That's the preferred method on Windows systems. It's not necessary to install both Splunk Enterprise and a Universal Forwarder (UF) on the same system. The installers don't even allow it, IIRC. Everything the UF can do, Splunk Enterprise can do.
I had used Splunk Enterprise (free trial version) and the Universal Forwarder on my PC (Windows 11), but I uninstalled them because of trouble with my PC. I want to re-install SE and UF, but the installers output an error: "This version of Splunk Enterprise has already been installed on this PC". I tried deleting the Splunk and Universal Forwarder registry entries and program files, and ran the command "sc delete Splunk" in cmd, but the installer's output is the same. If you know how to troubleshoot this, please tell me.