All Posts

From the error "Error occurred reading enterprise-attack.json", could it be that it can't find the file, or that it's a permissions issue? A few things to check:
- Verify the user/role has access to the Security Essentials app.
- Verify the app was installed correctly with the correct permissions (via the GUI, or copied to the /opt/splunk/etc/apps/ folder with the correct Splunk OS-level permissions, assuming this is Linux-based).
- Uninstall and re-install.
See how that goes first.
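For example, a rough way to check this on a Linux host; this is just a sketch assuming a default /opt/splunk install, and the app folder name and the file's location within it can vary by version:

find /opt/splunk/etc/apps/Splunk_Security_Essentials -name enterprise-attack.json -ls
# Substitute the path found above; this confirms the splunk user can read the file
sudo -u splunk test -r "<path-from-find>" && echo readable
# Ownership of the whole app should normally be the splunk user and group
chown -R splunk:splunk /opt/splunk/etc/apps/Splunk_Security_Essentials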
There are several ways to send data to HEC and not all of them use that format.  The raw endpoint should accept events in your desired format.  See https://docs.splunk.com/Documentation/Splunk/8.0.5/Data/FormateventsforHTTPEventCollector#Format_events_for_HTTP_Event_Collector
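For example, a minimal sketch of a raw-endpoint request (hostname, port, sourcetype, and token are placeholders; a channel GUID may also be required if indexer acknowledgment is enabled):

curl -k "https://splunk.example.com:8088/services/collector/raw?sourcetype=my_sourcetype" \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"field1":"value","field2":"value"}'

The payload is indexed as-is, with no "event" envelope required.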
Hi Team,

Our Splunk search heads are hosted in Splunk Cloud and managed by Support, and we are currently running the latest version (9.1.2308.203). This concerns the Max Lines setting in the Format menu of the Search and Reporting app.

Previously, Splunk defaulted to displaying 20 or more lines in search results. As an administrator who has extracted Splunk logs across various applications over the years, I never needed to expand concise search results to read all lines. In recent weeks, however, perhaps following an upgrade of the search heads, I've noticed that each time I open a new search window, or an existing tab times out and auto-refreshes, the Format > Max Lines option resets to 5. As a result, I have to adjust it after nearly every search, which has become cumbersome.

Kindly provide guidance on changing the default value from 5 to 20 in the Search and Reporting app on the ad hoc and ES search heads. This adjustment would ease the inconvenience for the many customers and end users who currently have to customize it for each search.

The relevant file is ui-prefs.conf, so I filed a case with Support. Unfortunately, Support wasn't able to make the necessary changes on the backend and suggested that I create a custom app and deploy it through the app upload section. I created a custom app (a MaxLines_Values folder containing default and metadata folders, as shown in the attached screenshot), deployed it, and it passed the vetting process. After restarting the search head, the changes didn't take effect, and when I reached out again, Support was unable to provide a solution. I need assistance resolving this; please refer to the screenshot of the deployed app for reference.
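For reference, a minimal sketch of what such an app could contain, assuming the relevant setting is display.events.maxLines in ui-prefs.conf (the folder layout mirrors the one described above; the access values are illustrative):

# MaxLines_Values/default/ui-prefs.conf
[search]
display.events.maxLines = 20

# MaxLines_Values/metadata/default.meta
[]
access = read : [ * ], write : [ admin ]
export = system

If the app is installed but the default still doesn't change, per-user settings under etc/users can override app defaults, which may be worth raising with Support.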
Essentially, if the only change is the OS, it should be fairly easy to migrate. Ensure the new systems have the same IPs or hostnames, depending on whether you use names or IPs in the configs. Ensure the splunk user and group are created on the new servers, and follow the instructions for installing from the tar file. Stop the current servers, tar up the /opt/splunk folder and any data store folders, then untar them onto the new boxes.
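For instance, a rough sketch under those assumptions (a default /opt/splunk install, a splunk service account, and enough space in /tmp; paths are illustrative):

# On the old server
/opt/splunk/bin/splunk stop
tar -czf /tmp/splunk_backup.tar.gz -C /opt splunk

# On the new server, after creating the splunk user and group
tar -xzf /tmp/splunk_backup.tar.gz -C /opt
chown -R splunk:splunk /opt/splunk
sudo -u splunk /opt/splunk/bin/splunk start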
@gcusello

index=abc source="http:clhub-preprod" sourcetype=_json "ct-fin-abc-apps-papi-v1-uw2-ut" "Action Name"
| rex field=event "^(?<event_type>\w+)"
| where event_type="INFO"
| spath input=event

The event field contains the log data above.
Hi @kranthimutyala2, which sourcetype are you using? Did you try json or _json? In that case INDEXED_EXTRACTIONS=json is enabled. Ciao. Giuseppe
@gcusello I tried that, but it didn't work.
Hi Community, I have this global setting in inputs.conf:

[http]
enableSSL = 1
port = 8088
requireClientCert = false
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate.cer

I have two [token_name] stanzas configured and working fine, but now I need to use a different server certificate for one of them. So I'd like to do something like this:

[http://stanza1]
token = token1
index = index1
sourcetype = sourcetype1

[http://stanza2]
token = token2
index = index2
sourcetype = sourcetype2
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate_2.cer

I'm not sure this is possible, though, since the docs say the per-token settings are only these:

connection_host
disabled
index
indexes
persistentQueueSize
source
queueSize
sourcetype
token

Any hint?

Thanks,
Marta
Hi @myte, you could try scheduling the two searches (at whatever frequency meets your requirements), separately or together, saving the results in a summary index. Then you can run a very efficient search on the summary index, as in the sketch below. Ciao. Giuseppe
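A rough sketch of the pattern (the index and field names are placeholders):

your_base_search
| stats count, avg(response_time) as avg_response_time by api_name
| collect index=my_summary

Schedule that search, then point the dashboard panels at the much smaller summary data, e.g. index=my_summary | stats sum(count) as count by api_name.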
Hi @kranthimutyala2, did you try the spath command (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Spath)? It automatically recognizes all the field-value pairs. Ciao. Giuseppe
Hello @somesoni2 @andrewtrobec @edoardo_vicendo, we saw the same issue, and it can wipe your changes in the client's local folder. Is it normal that, even when there is no local folder in deployment-apps/app, reloading the deployment server creates deployment-apps/app/local and deployment-apps/app/local/app.conf? We had to leave it, resync correctly, and then at the end add our specific conf files on the heavy forwarder in etc/apps/app/local. Thanks.
I need to connect data from a third-party application to Splunk via HEC. It sends data in this format, one event per request:

{ "field1":"value", "field2":"value" }

After looking at the documentation for HEC, I discovered that for events to work correctly, they must have the following format:

{ "event": { "field1":"value", "field2":"value" } }

Otherwise I receive this error: {"text":"No data","code":5}

I don't have the ability to change the event format on the third-party application side. How can this problem be solved?
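For illustration, a minimal sketch of a request the event endpoint accepts (hostname and token are placeholders):

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": {"field1":"value","field2":"value"}}'

Since the third-party application cannot produce that envelope, the raw endpoint mentioned above is the usual way around it.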
I want to extract all the key-value pairs from this event dynamically. Can someone help with the query?

INFO 2024-04-29 16:30:08,456 [[MuleRuntime].uber.05: [ct-fin-abc-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333] com.sfdc.it.ei.mule4.eilog.EILog: {"worker":"0","region":"us-ne-2","applicationName":"ct-fin-abc-apps-papi-v1-uw2-ut","applicationVersion":"1.0.7","applicationType":"PAPI","environment":"ct-app-UAT","domain":"CR C4E","x-transaction-id":"xxxx-e691-xx-91bf-xxx","tx.flow":"read-input-files-sub-flow","tx.fileName":"implementation.xml","txlineNumber":"71","stage":"MILESTONE","status":"SUCCESS","endpointSystem":"","jsonRecord":"{\n \"Task Name\": \"Cash Apps PAPI\",\n \"Action Name\": \"Read Input Files GDrive Start\",\n \"Run Reference\": \"xx-0645-11ef-xx-xx\",\n \"Record Id Type\": \"Invoice\"\n}","detailText":"Start Reading Input Files from G drive","businessRecordId":"","businessRecordType":"","batchSize":"0","totalRecords":"0","remainingRetries":"0","timestamp":"2024-04-29 16:30:08.455","threadName":"[MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333"}
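A possible sketch, assuming the JSON payload always follows the literal "EILog: " marker (the json_payload field name is illustrative):

index=abc source="http:clhub-preprod" "ct-fin-abc-apps-papi-v1-uw2-ut"
| rex field=_raw "EILog: (?<json_payload>\{.+\})$"
| spath input=json_payload
| spath input=jsonRecord

The first spath extracts the top-level pairs (worker, region, applicationName, ...); the second parses the nested jsonRecord string into fields such as "Task Name" and "Action Name".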
Set your dropdown to this dynamic data source:

| makeresults
| eval day=mvrange(0,7)
| mvexpand day
| eval day=relative_time(now(),"@d-".(1+day)."d")
| eval suffix=tonumber(trim(strftime(day, "%e")))
| eval suffix=case(suffix%100 >= 11 AND suffix%100 <= 13, "th", suffix%10 == 1, "st", suffix%10 == 2, "nd", suffix%10 == 3, "rd", true(), "th")
| eval day=strftime(day,"%e").suffix." ".strftime(day,"%B")
| table day

(The first case branch handles the 11th, 12th, and 13th, which would otherwise get "st", "nd", and "rd".)
Hi all, I just installed the Security Essentials app on my Splunk instance, but I'm having issues retrieving the MITRE matrix. I get the following error:

External search command 'mitremap' returned error code 1. Script output = "Error! "{""status"": ""ERROR"", ""description"": ""Error occurred reading enterprise-attack.json"", ""message"": ""'objects'""}" "

This error occurs both in the default dashboard for the MITRE framework and when I try to use the | mitremap command in a search. Does anyone have any suggestions to solve this? Thank you in advance!
Good morning. When a Meraki alert comes from the IDS module, the device reporting the alert does not appear. If a client has many Merakis and organizations, it is very difficult to identify the device involved, and it is a huge waste of time for the analysts. We think the problem is in the API call against the IDS module: in the other modules the call also requests the device name, but for the IDS module it does not. Any solution? Splunk Add-on for Cisco Meraki
... for now I'm continuing with the separate stats queries; it seems easier to keep them as simple stats tables referring to the base search. Cheers
Hi folks, thank you for the quick replies and the great suggestions.

@gcusello I like the append solution, this is great. However, as you mention, it isn't the most efficient, especially when considering adding a third or even fourth row. I did consider expanding the dashboard base search in the background, creating another search id, but again, that's not the most efficient.

@bowesmana I've not tried that way yet; I'll give it a go today. Will it efficiently handle varying time windows from the time picker? I'm not sure multiplying the average per minute by 60 to get an hour would be effective; perhaps it will. I can compare the different result sets I get.

As you've mentioned, there are a ton of ways to achieve results in Splunk. I'm relatively new to it, but it's been invaluable in analysing API XML and JSON logs. Having a lot of fun! Thanks again.
I want to customize the Splunk Studio dashboard so that it shows the last 7 days (each day) separately. The requirement applies to only this one dashboard, not globally. Today's date is 2nd May 2024, and I want to show each of the last 7 historical days separately. I want options like the ones below in the presets, so that when users select a day they see that day's data. The historical dates should change dynamically:
1st May
30th April
29th April
28th April
27th April
26th April
25th April
Hello, I have a requirement to monitor the Oracle database ASM shared disks/volumes space. I didn't find this among the capabilities of the database collector's hardware monitoring. I think the answer is that it is part of the Machine Agent's capabilities rather than the database hardware monitoring capabilities. I also have another question: where do these metrics come from? And does that apply to every agent? BR, Abdulrahman Kazamel