All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, I am using a case statement to map values, but I am not getting the expected values; every event comes back as "Unknown". BucketFolder values look like: inbound/concur

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(BucketFolder="%inbound%epm%","EPM", BucketFolder="%inbound%KPIs%","APEX_File_Upload", BucketFolder="%inbound%concur%","ConcurFile_Upload", true(),"Unknown")
| stats values(InterfaceName) as InterfaceName min(timestamp) as Timestamp values(BucketFolder) as BucketFolder values(Status) as Status by correlationId
| table Status InterfaceName Timestamp FileName Bucket BucketFolder correlationId
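A hedged sketch of one likely fix: case() compares strings exactly, so the SQL-style "%" wildcards never match and every event falls through to "Unknown". The like() function does understand "%" wildcards, so the same eval could read:

| rename bucketFolder as BucketFolder
| eval InterfaceName=case(
    like(BucketFolder, "%inbound%epm%"), "EPM",
    like(BucketFolder, "%inbound%KPIs%"), "APEX_File_Upload",
    like(BucketFolder, "%inbound%concur%"), "ConcurFile_Upload",
    true(), "Unknown")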
Hi Team, our Splunk search heads are hosted in Splunk Cloud, managed by Support, and currently running the latest version (9.1.2308.203). This relates to the Max Lines setting under the Format menu of the Search & Reporting app.

Previously, Splunk defaulted to displaying 20 or more lines in search results. As an administrator who has extracted Splunk logs across various applications over the years, I never needed to expand concise search results to read all lines. However, in recent weeks, perhaps following an upgrade of the search heads, I've noticed that each time I open a new search window, or an existing tab times out and auto-refreshes, the Format > Max Lines option resets to 5. As a result, I have to adjust it after nearly every search, which has become cumbersome. Kindly provide guidance on changing the default from 5 to 20 in the Search & Reporting app on the ad-hoc and ES search heads; this would ease the inconvenience for the many users who currently have to customize it for each search.

The relevant file is ui-prefs.conf, so I filed a case with Support. Unfortunately, Support was unable to make the change on the backend and suggested I create a custom app and deploy it via the app upload section. I created a custom app, deployed it, and it passed the vetting process; after the search head restarted, the changes still did not take effect. On reaching out again, Support could not provide a solution, so I need assistance resolving this. Please refer to the screenshot of the deployed app: a MaxLines_Values folder containing default and metadata folders, as shown in the screenshot. Kindly help with the same.
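For reference, a minimal sketch of what such an app's ui-prefs.conf could contain, assuming the folder layout described above (the stanza names the view the preference applies to; display.events.maxLines is the documented ui-prefs.conf key, but verify it against the spec for your version):

# etc/apps/MaxLines_Values/default/ui-prefs.conf
[search]
display.events.maxLines = 20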
Hi Community, I have this global setting in inputs.conf:

[http]
enableSSL = 1
port = 8088
requireClientCert = false
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate.cer

I have two [token_name] stanzas configured and working fine, but now I need to use a different server certificate for one stanza. So I'd like to do something like this:

[http://stanza1]
token = token1
index = index1
sourcetype = sourcetype1

[http://stanza2]
token = token2
index = index2
sourcetype = sourcetype2
serverCert = $SPLUNK_HOME/etc/auth/my_certificates/certificate_2.cer

I'm not sure it is possible, though, since the documentation says the per-token settings are only these:

connection_host
disabled
index
indexes
persistentQueueSize
source
queueSize
sourcetype
token

Any hint?

Thanks,
Marta
I need to send data from a third-party application to Splunk via HEC. It sends data in this format, one event per request:

{ "field1":"value", "field2":"value" }

After reading the HEC documentation, I discovered that for events to be accepted they must have the following format:

{ "event": { "field1":"value", "field2":"value" } }

Otherwise I receive the error: {"text":"No data","code":5}

I don't have the ability to change the event format on the third-party application side. How can this problem be solved?
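One possible workaround, sketched under the assumption that the sender can at least be pointed at a different URL: HEC's raw endpoint does not require the {"event": ...} envelope. The host, token, sourcetype, and channel GUID below are placeholders (the channel is an arbitrary client-chosen GUID, which the raw endpoint requires):

curl -k "https://splunk.example.com:8088/services/collector/raw?channel=0aeeac95-1a57-4d16-a0e1-4aa1dd9d7b70&sourcetype=my_json" \
  -H "Authorization: Splunk <token>" \
  -d '{"field1":"value","field2":"value"}'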
I want to extract all the key-value pairs from this event dynamically. Can someone help with the query?

INFO 2024-04-29 16:30:08,456 [[MuleRuntime].uber.05: [ct-fin-abc-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333] com.sfdc.it.ei.mule4.eilog.EILog: {"worker":"0","region":"us-ne-2","applicationName":"ct-fin-abc-apps-papi-v1-uw2-ut","applicationVersion":"1.0.7","applicationType":"PAPI","environment":"ct-app-UAT","domain":"CR C4E","x-transaction-id":"xxxx-e691-xx-91bf-xxx","tx.flow":"read-input-files-sub-flow","tx.fileName":"implementation.xml","txlineNumber":"71","stage":"MILESTONE","status":"SUCCESS","endpointSystem":"","jsonRecord":"{\n \"Task Name\": \"Cash Apps PAPI\",\n \"Action Name\": \"Read Input Files GDrive Start\",\n \"Run Reference\": \"xx-0645-11ef-xx-xx\",\n \"Record Id Type\": \"Invoice\"\n}","detailText":"Start Reading Input Files from G drive","businessRecordId":"","businessRecordType":"","batchSize":"0","totalRecords":"0","remainingRetries":"0","timestamp":"2024-04-29 16:30:08.455","threadName":"[MuleRuntime].uber.05: [ct-fin-aps-apps-papi-v1-uw2-ut].abc-apps-schedular-main-flow.BLOCKING @68f82333"}
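A sketch of one common approach, assuming the JSON payload always follows the literal "EILog: " (the index and the json_payload field name are placeholders): isolate the JSON with rex, then let spath extract every key-value pair dynamically:

index=<your_index> "EILog:"
| rex field=_raw "EILog: (?<json_payload>\{.+\})"
| spath input=json_payload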
Hi all, I just installed the Security Essentials app on my Splunk instance but I'm having issues retrieving the MITRE matrix. I get the following error:

External search command 'mitremap' returned error code 1. Script output = "Error! "{""status"": ""ERROR"", ""description"": ""Error occurred reading enterprise-attack.json"", ""message"": ""'objects'""}" "

This error occurs both in the default MITRE Framework dashboard and when I run | mitremap directly in search. Does anyone have any suggestions to solve this? Thank you in advance!
Good morning. When a Meraki alert comes from the IDS module, the device reporting the alert does not appear. If a client has many Meraki devices and organizations, it is very difficult to identify the device involved, which is a huge waste of time for the analysts. We think the problem is in the API call against the IDS module: for other modules the call requests the device name, but for the IDS module it does not. Any solution? Splunk Add-on for Cisco Meraki.
I want to customize a Splunk Dashboard Studio dashboard so that it shows each of the last 7 days separately. The requirement applies to only one dashboard, not globally. Today's date is 2nd May 2024, and I want to showcase each of the previous 7 days here. I want options like the ones below in the presets, so that if users select a day they see that day's data. The historical dates should change dynamically:

1st May
30th April
29th April
28th April
27th April
26th April
25th April
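A hedged sketch of one way to do this in Dashboard Studio: feed a dropdown input from a search that generates the last seven days, using a relative-time earliest modifier as the option value (the label/value field names and the day_tok token are assumptions):

| makeresults count=7
| streamstats count as days_ago
| eval value = "-" . tostring(days_ago) . "d@d"
| eval label = strftime(relative_time(now(), value), "%d %b %Y")
| table label value

The panel search could then use the selected value for both bounds by chaining an offset, e.g. earliest=$day_tok$ latest=$day_tok$+1d, so the labels and time ranges shift forward automatically each day.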
Hello, I have a requirement to monitor Oracle database ASM shared disk/volume space. I didn't find this capability in the database collector's hardware monitoring. I suspect the answer is that this is part of the Machine Agent's capabilities rather than the database hardware monitoring capabilities. I also have another question: where do these metrics come from, and does the answer apply to every agent? BR, Abdulrahman Kazamel
Hi All, I have set up a new deployment server and a new heavy forwarder. There is a successful phone-home connection when I check with the command "./splunk list deploy-clients"; the client is successfully connecting to the server. I want to push a new app to this new heavy forwarder, but the app is not getting pushed from the deployment server. I verified that the app is under the deployment-apps directory, and I also checked serverclass.conf; both look good. What could be the reason the app is not getting written? Do I need to first create the app on the HF manually so that the DS finds it and pushes the changes? Regards, PNV
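For comparison, a minimal serverclass.conf sketch (the class, app, and host names are hypothetical), plus the reload step that is easy to miss; the deployment server does not pick up serverclass.conf edits until it is reloaded, and it writes the app into etc/apps on the client itself, so the app should not need to be created on the HF first:

# serverclass.conf on the deployment server
[serverClass:hf_class]
whitelist.0 = my-heavy-forwarder

[serverClass:hf_class:app:my_new_app]
restartSplunkd = true
stateOnClient = enabled

# then, on the deployment server:
./splunk reload deploy-server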
I have a problem on the indexer cluster master. Since a week ago I have been getting a red error saying there is a data durability issue.

[Screenshot: the data durability error on the cluster master]
[Screenshot: the indexer clustering view from the cluster master]
[Screenshot: the detail from inside one index]

Any help?
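As a hedged starting point, the CLI on the cluster manager summarizes per-index replication and search factor status, which may help narrow down which buckets are affected:

# on the cluster master/manager
./splunk show cluster-status --verbose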
We have a load balancer sitting in front of our search head cluster that reverse-proxies the connection to the search heads over HTTPS port 443. The search head web interfaces run on port 8000. The issue is that when our search heads send out alert emails, they append :8000 to the load balancer URL, which doesn't work because the load balancer is listening on 443. Is there a way to tell the search heads to leave off the port, or to specify a different port explicitly in the alert emails?
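One setting worth checking, offered as a hedged pointer (the LB hostname below is a placeholder): alert_actions.conf has a hostname key under [email] that, when given with a protocol, sets the full base URL used for links in alert emails:

# alert_actions.conf on the search heads
[email]
hostname = https://splunk-lb.example.com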
Description: How can I produce a URL in an alert email that uses field values, either in in-line results or in the body of the email? When an alert is triggered, an email is sent with the field dashboard_url. For projects with no spaces in the name, the URL is clickable. If there is a space, the URL is truncated at the space and is broken.

Sample query:

| makeresults format=json data="[{\"project\":\"projectA - Team A\"},{\"project\":\"projectB\"}]"
| eval dashboard_url="https://internal.com:8000/en-US/app/search/dash?form.q_project=".project.""

Result: https://internal.com:8000/en-US/app/search/dash?form.q_project=projectA - Team A

Workarounds attempted: I tried building the dashboard_url in the email body using results.project. The same condition occurs: projects with spaces get a broken link.
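A sketch of one workaround: percent-encode the space when building the URL. eval has no general URL-encoding function, so replace() targets the space explicitly:

| makeresults format=json data="[{\"project\":\"projectA - Team A\"},{\"project\":\"projectB\"}]"
| eval dashboard_url="https://internal.com:8000/en-US/app/search/dash?form.q_project=" . replace(project, " ", "%20")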
I have a simple search, index=xxxxx "User ID", and I need the correct syntax to get the actual username in the results.

Sample event: INFO xcvxcvxcvxcvxcvxcvxcvxcvxcvxcvvcx - Logged User ID-XXXXXX

Now I can easily count how many people logged on, but I need to report on the XXXXXX. I thought about doing:

index=xxxxx "User ID" | rex field=_raw "User\/s\ID\/-\(?<username>\d+)" | stats count by username

The search is returning results, but just a count; I need to see the username in my stats. I am new to this, so please mind the ignorance.
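A hedged rewrite of that extraction, assuming the ID is the run of non-space characters after the literal "User ID-" (swap \S+ for \d+ if the IDs are strictly numeric):

index=xxxxx "User ID"
| rex "User ID-(?<username>\S+)"
| stats count by username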
How do I integrate my website hosted on AWS (EC2) with Splunk?
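A minimal sketch of the usual route, assuming a Linux EC2 host with a universal forwarder installed (the indexer address, log path, index, and sourcetype are placeholders): point the forwarder at your indexer and monitor the web server's logs:

# on the EC2 instance, after installing the universal forwarder
/opt/splunkforwarder/bin/splunk add forward-server my-indexer.example.com:9997
/opt/splunkforwarder/bin/splunk add monitor /var/log/nginx/access.log -index web -sourcetype nginx:access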
Hello Splunk community. I have been researching this question quite a lot and have gone through many articles, but it's still unclear to me. Can someone please explain when we would want to use a heavy forwarder instead of a universal forwarder? I would really appreciate a real use case where, in order to get data into Splunk, we would choose a heavy forwarder over a universal forwarder, and why. Thanks in advance for spending time replying to my post.
I have a summary index that pulls in normalized data from two different sources (entirely different applications that catalog and categorize the data differently). In situations where I have events in the summary index from both sources, they are 99.99% of the time duplicates of each other; however, source 1 has better data fidelity than source 2. Say I weighted the high-fidelity source with a 1 and the low-fidelity source with a 2: I'm trying to find a way to filter with a by clause on another field which both events share (like device, or ip_address). Something logically like:

| where source=coalesce("sourcename1","sourcename2") by field

but where doesn't take a by clause. In the past I've done similar things by coalescing each field I want with a case statement, but in this case there are quite a few fields and I'm wondering if there's a more efficient way of doing it. Any ideas on the best way to accomplish this?
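A sketch of one way to express that preference without coalescing every field (the index name is a placeholder; the source names and the device by-field are taken from the post): rank each source, then keep the best-ranked event per device:

index=my_summary (source="sourcename1" OR source="sourcename2")
| eval fidelity=case(source="sourcename1", 1, source="sourcename2", 2)
| sort 0 device fidelity
| dedup device

dedup keeps the first event it sees per device, so after sorting by fidelity the source-1 copy wins whenever both exist.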
I have a multi-select like this:

<input token="name" type="multiselect">
  <label>Name</label>
  <choice value="*">ALL</choice>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>name</fieldForLabel>
  <fieldForValue>name</fieldForValue>
  <search>
    <query>index=my_index | dedup name | sort name</query>
  </search>
</input>

It correctly produces a token $name$ with value:

(name="VALUE1" OR name="VALUE2" ... )

But I have a need to make the token look like:

(name="VALUE1" OR name="VALUE1.*" OR name="VALUE2" OR name="VALUE2.*" ... )

because if "VALUE1" is selected in the multi-select, I want events that match both "VALUE1" and "VALUE1.*" (note the dot star, not just star). But I cannot just match "VALUE1*" as that will bring in events that have a different value which merely BEGINS with "VALUE1", which I don't want. So the question is: how can I utilize the values TWICE in the token generation? I can't wrap my head around how I might be able to achieve this.
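One hedged approach for this pattern: leave the multiselect as-is and derive a second token in a <change> handler added inside the existing <input>, using eval's replace() with a back-reference to emit each selected value twice. The name_expanded token is a new name, and the nested XML/eval quoting is fiddly and may need adjustment:

<change>
  <eval token="name_expanded">replace($name|s$, "name=\"([^\"]+)\"", "name=\"\1\" OR name=\"\1.*\"")</eval>
</change>

Panels would then reference $name_expanded$ instead of $name$.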
Hi, I have been developing apps on Splunk SOAR for some time, and I have recently encountered app errors that say "Failed to read message from connector: <app_name>" on multiple instances. This is mostly observed in cases where I am processing responses from a REST call, filtering data, and adding the dictionaries to action results. The data structure looks perfect, and compared to working actions in the same app I see no difference in the action results. Also, the action works fine when tested in the App Wizard IDE (even for a published app); when tested through a playbook or run manually in a container, I start getting this message again. This is very strange: I have been stuck on this problem for a couple of weeks and unable to solve it. I have debugged all data that is mapped to action results and the summary. The JSON file output datapaths are also good (I have even removed all outputs from the JSON file except the default ones to see if they were the issue). I am facing this issue on two totally different apps on different instances (instance 1 running on 5.3.5 and instance 2 on 6.0). Any help is highly appreciated. An example of a processed response from the IDE is pasted below for reference; I am using this app for interacting with an LLM. As you can see, the app runs perfectly fine here, with no data missing or any app errors:

{"identifier": "text_prompt", "result_data": [{"data": [{"inputTextTokenCount": 4, "results": [{"tokenCount": 50, "outputText": "\nA traffic jam is a situation where a large number of vehicles are moving at a slower speed than usual, often due to an obstruction or congestion in the road. This can cause delays and frustration for drivers, as they struggle to move through the congest", "completionReason": "LENGTH"}]}], "extra_data": [], "summary": {"output_text": "\nA traffic jam is a situation where a large number of vehicles are moving at a slower speed than usual, often due to an obstruction or congestion in the road. This can cause delays and frustration for drivers, as they struggle to move through the congest", "output_tokens": 50, "input_tokens": 4}, "status": "success", "message": "Output text: \nA traffic jam is a situation where a large number of vehicles are moving at a slower speed than usual, often due to an obstruction or congestion in the road. This can cause delays and frustration for drivers, as they struggle to move through the congest, Output tokens: 50, Input tokens: 4", "parameter": {"prompt_text": "explain traffic jam", "model": "amazon.titan-text-lite-v1", "temperature": 0, "top_p": 1, "max_output_token": 50}, "context": {}}], "result_summary": {"total_objects": 1, "total_objects_successful": 1}, "status": "success", "message": "1 action succeeded", "exception_occured": false, "action_cancelled": false}
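A hedged guess, not a confirmed diagnosis: SOAR exchanges messages with the connector process over stdout, so anything else written there (a stray print(), or a library logging to stdout) can surface as exactly this "Failed to read message from connector" error, while the App Wizard IDE is more forgiving. A sketch of the safe pattern inside a BaseConnector subclass (the handler name and response payload are placeholders):

# a sketch; the phantom imports are the standard SOAR app SDK
import phantom.app as phantom
from phantom.action_result import ActionResult

# inside your BaseConnector subclass:
def _handle_text_prompt(self, param):
    action_result = self.add_action_result(ActionResult(dict(param)))
    response_json = {"inputTextTokenCount": 4}  # stand-in for the real REST response
    self.debug_print("model response", response_json)  # goes to the app log, not stdout
    # print(response_json)  # writing to stdout can corrupt the connector message stream
    action_result.add_data(response_json)
    return action_result.set_status(phantom.APP_SUCCESS)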
I wrote a simple query to parse my Windows security event logs for a user account; however, I am looking to add onto this and find out which devices the account is running on:

index="wineventlog" source="WinEventLog:Security" user="domainaccount"

My end goal is to be able to type a domain account into my search and find what device it's associated with or running as a service under.
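A small sketch building on that search: in the Windows Security sourcetype the reporting machine typically lands in host (or ComputerName), so grouping on it shows where each account appears:

index="wineventlog" source="WinEventLog:Security" user="domainaccount"
| stats count values(host) as devices by user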