All Posts

@gcusello I tried it, but it didn't work
Hi Community, I have this global setting in inputs.conf:

[http]
enableSSL = 1
port = 8088
requireClientCert = false
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate.cer

I have 2 [token_name] stanzas configured and working fine, but now I need to use a different server certificate for one of them. So I'd like to do something like this:

[http://stanza1]
token = token1
index = index1
sourcetype = sourcetype1

[http://stanza2]
token = token2
index = index2
sourcetype = sourcetype2
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate_2.cer

I'm not sure it is possible, though, since the docs say the only per-token settings are these:

connection_host
disabled
index
indexes
persistentQueueSize
source
queueSize
sourcetype
token

Any hint?

Thanks,
Marta
Hi @myte, you could try scheduling the two searches (at a frequency that meets your requirements), separately or together, saving their results in a summary index. Then you can run a very efficient search on the summary index. Ciao. Giuseppe
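For illustration, a minimal sketch of such a scheduled search (the index, sourcetype, and field names are hypothetical), writing its results to a summary index with collect:

index=my_index sourcetype=my_sourcetype
| stats count AS event_count BY host
| collect index=my_summary

The dashboard panel would then search the summary index instead, e.g. index=my_summary | stats sum(event_count) AS event_count BY host.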
Hi @kranthimutyala2, did you try the spath command (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Spath)? It automatically recognizes all the field-value pairs. Ciao. Giuseppe
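For the event in the question (where the JSON payload follows "EILog: "), a rough sketch, assuming the rex reliably isolates the JSON part of _raw:

... | rex field=_raw "EILog: (?<json_payload>\{.+\})$"
| spath input=json_payload
| table applicationName environment status detailText

spath also creates fields for nested keys such as tx.flow; the escaped JSON inside jsonRecord would need a second pass, e.g. | spath input=jsonRecord.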
Hello @somesoni2 @andrewtrobec @edoardo_vicendo, we saw the same issue, and it can wipe your changes in the client's local folder. So is it normal that, even if we don't have a local folder in deployment-apps/app, reloading the deployment server creates deployment-apps/app/local plus deployment-apps/app/local/app.conf? We had to leave it, let it resync correctly, and then at the end add our specific conf files to the heavy forwarder's etc/apps/app/local. Thanks.
I need to send data from a third-party application to Splunk via HEC. It sends data in this format, one event per request:

{ "field1":"value", "field2":"value" }

After looking at the documentation for HEC, I discovered that for events to work correctly, they must have the following format:

{ "event":{ "field1":"value", "field2":"value" } }

Otherwise I receive an error: {"text":"No data","code":5}

I don't have the ability to change the event format on the third-party application side. How can this problem be solved?
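One possible workaround, if the sender cannot wrap its payload: point it at HEC's raw endpoint, which accepts payloads without the "event" envelope. A minimal sketch with curl (host, port, token, and sourcetype are placeholders; an X-Splunk-Request-Channel header may also be required if indexer acknowledgment is enabled):

curl -k "https://mysplunk.example.com:8088/services/collector/raw?sourcetype=my_json" \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"field1":"value","field2":"value"}'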
I want to dynamically extract all the key-value pairs from this event. Can someone help with the query?

INFO 2024-04-29 16:30:08,456 [[MuleRuntime].uber.05: [ct-fin-abc-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333] com.sfdc.it.ei.mule4.eilog.EILog: {"worker":"0","region":"us-ne-2","applicationName":"ct-fin-abc-apps-papi-v1-uw2-ut","applicationVersion":"1.0.7","applicationType":"PAPI","environment":"ct-app-UAT","domain":"CR C4E","x-transaction-id":"xxxx-e691-xx-91bf-xxx","tx.flow":"read-input-files-sub-flow","tx.fileName":"implementation.xml","txlineNumber":"71","stage":"MILESTONE","status":"SUCCESS","endpointSystem":"","jsonRecord":"{\n \"Task Name\": \"Cash Apps PAPI\",\n \"Action Name\": \"Read Input Files GDrive Start\",\n \"Run Reference\": \"xx-0645-11ef-xx-xx\",\n \"Record Id Type\": \"Invoice\"\n}","detailText":"Start Reading Input Files from G drive","businessRecordId":"","businessRecordType":"","batchSize":"0","totalRecords":"0","remainingRetries":"0","timestamp":"2024-04-29 16:30:08.455","threadName":"[MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333"}
Set your dropdown to this dynamic data source:

| makeresults
| eval day=mvrange(0,7)
| mvexpand day
| eval day=relative_time(now(),"@d-".(1+day)."d")
| eval suffix=tonumber(trim(strftime(day, "%e")))
| eval suffix=case(suffix%100 >= 11 AND suffix%100 <= 13, "th", suffix%10 == 1, "st", suffix%10 == 2, "nd", suffix%10 == 3, "rd", true(), "th")
| eval day=trim(strftime(day,"%e")).suffix." ".strftime(day,"%B")
| table day
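If the dashboard also needs a machine-usable value for each choice (e.g. to drive a time token), here is a variant sketch that keeps the day's epoch alongside the label (field and token names are arbitrary):

| makeresults
| eval day=mvrange(1,8)
| mvexpand day
| eval day_epoch=relative_time(now(), "@d-".day."d")
| eval suffix=tonumber(trim(strftime(day_epoch, "%e")))
| eval suffix=case(suffix%100 >= 11 AND suffix%100 <= 13, "th", suffix%10 == 1, "st", suffix%10 == 2, "nd", suffix%10 == 3, "rd", true(), "th")
| eval label=trim(strftime(day_epoch, "%e")).suffix." ".strftime(day_epoch, "%B")
| table label day_epoch

With the dropdown's label set to label and its value to day_epoch, a downstream search can use earliest=$day_token$ and compute latest as the following midnight; the exact token wiring depends on the dashboard framework.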
Hi all, I just installed the Security Essentials app on my Splunk instance, but I'm having issues retrieving the MITRE matrix. I get the following error:

External search command 'mitremap' returned error code 1. Script output = "Error! "{""status"": ""ERROR"", ""description"": ""Error occurred reading enterprise-attack.json"", ""message"": ""'objects'""}" "

This error occurs both in the default dashboard for the MITRE framework and if I try to use the command | mitremap in a search. Does anyone have any suggestions to solve this? Thank you in advance!
Good morning. When a Meraki alert comes from the IDS module, it does not show which device is reporting the alert. If a client has a lot of Merakis and organizations, it's very difficult to identify the device involved, and it's a huge waste of time for the analysts. We think the problem is in the API call against the IDS module: in the other modules the call also requests the device name, but for the IDS module it does not. Any solution? Splunk Add-on for Cisco Meraki
... for now I'm continuing with the separate stats queries; it seems easier to keep them as simple stats tables referring to the base search. Cheers
Hi folks, thank you for the quick replies and the great suggestions.

@gcusello I like the append solution, this is great; however, as you mention, it isn't the most efficient, especially when considering adding a third or even fourth row. I did consider expanding the dashboard base search in the background, creating another search id, but again, not the most efficient.

@bowesmana I've not tried this way yet; I will give it a go today. Will it efficiently handle varying time windows from the time picker? I'm not sure multiplying the average per minute by 60 to get an hour would be effective; perhaps it will, and I can compare the different result sets I get.

As you've mentioned, there are a ton of ways to achieve results in Splunk. I'm relatively new to it, but it's been invaluable in analysing API XML and JSON logs. Having a lot of fun! Thanks again.
I want to customize the Splunk Studio dashboard in such a way that it shows the last 7 days (each day) separately. The requirement is for only one dashboard, not a global change.

Today's date is 2nd May 2024, so I want to showcase each of the last 7 historical days separately. I want options like the below in the presets, so that if users select a day they see that day's data. The historical dates should change dynamically.

1st May
30th April
29th April
28th April
27th April
26th April
25th April
Hello, I have a requirement to monitor the Oracle database ASM shared disks/volumes space. I didn't find this among the capabilities of the database collector's hardware monitoring. I think the answer is "it is part of the Machine Agent capabilities, not the database hardware monitoring capabilities". Also, I have another question: where do these metrics come from? And does this apply to every agent? BR, Abdulrahman Kazamel
Try this:

input_stanza_name = helper.get_input_stanza_names()
Providing there are no issues, a rolling restart is OK to perform. It's best to do this when the cluster is least busy, or during a maintenance window for your BAU operations. A rolling restart performs a phased restart of all peer nodes, so that the indexer cluster as a whole can continue to perform its function during the restart process; data should be sent to the other indexers whilst one is being restarted. There are a number of checks it performs, so it can take a while, depending on your architecture.

First check the status; you can use the manager GUI or the CLI:

/opt/splunk/bin/splunk show cluster-status --verbose

Then restart from the GUI, or use the CLI:

/opt/splunk/bin/splunk rolling-restart cluster-peers
Hi @Poojitha, in the Deployment Server GUI there's an option to flag that an app requires a Splunk restart on the client when the app is updated; this is mandatory to reload the configurations on the client. Ciao. Giuseppe
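For reference, a minimal serverclass.conf sketch of the same setting (the serverclass name, whitelist pattern, and app name are placeholders):

[serverClass:my_serverclass]
whitelist.0 = my-heavy-forwarder*

[serverClass:my_serverclass:app:my_custom_app]
restartSplunkd = true
stateOnClient = enabled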
@gcusello Thanks for your response. I checked the first point you mentioned: the app which I want to push has splunk user permission on it, and the HF is set to run as splunk as well, so it's looking fine there. What do you mean by "flagged the restart Splunk flag for that app"? I did not get this; please can you explain more? I am deploying a custom app. Regards, PNV
Hi @Poojitha, first check if the owner of the app is splunk: if it's root and you are using the splunk user to run Splunk services on the HF, you could have problems. Then check if you flagged the restart Splunk flag for that app. Next, check again (even if you already checked) the serverclass: check if the server is included in the serverclass. Last check: what kind of app are you deploying, a custom app or a Splunkbase app? Ciao. Giuseppe
Hi All, I have set up a new deployment server and a new heavy forwarder. There is a successful phonehome connection when I check with the command "./splunk list deploy-clients"; the client is successfully connecting to the server. I want to push a new app to this new heavy forwarder, but the app is not getting pushed from the deployment server. I verified that the app is under the deployment_apps directory, and I also checked the serverclass.conf file; both look good. What is the reason that the app is not getting written? Do I need to first create the app on the HF manually so that the DS finds it and pushes the changes? Regards, PNV
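If serverclass.conf was edited by hand, two checks worth running on the deployment server (default install paths assumed): reload the deployment server so it re-reads the serverclass, then verify how the serverclass actually resolves:

/opt/splunk/bin/splunk reload deploy-server
/opt/splunk/bin/splunk btool serverclass list --debug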