All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Looking to create a dashboard that allows users to look up usernames, user information, and groups within Active Directory data. How do I create a search for this?
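A minimal sketch of the kind of search such a panel might run, assuming the AD data follows the Splunk Supporting Add-on for Active Directory conventions (the index, sourcetype, and the $user_tkn$ dashboard token here are assumptions, not details from the post):

```spl
index=msad sourcetype="ActiveDirectory" objectCategory="*User*" sAMAccountName="$user_tkn$"
| stats latest(displayName) AS Name latest(mail) AS Email values(memberOf) AS Groups BY sAMAccountName
```

$user_tkn$ would come from a text input on the dashboard.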
Hi, I have a question about authenticating to an IDX cluster peer via REST. We have the following environment: 3 IDX in a cluster, 3 SH in a cluster, 1 CM (License Manager, IDX Cluster Manager, Deployer & Deployment Server). Our normal authentication for the web UI is currently LDAP. With my LDAP user I can perform a GET request directly against an indexer, but with a local user created via the WebUI (tried a local user in the SHC and on the CM) I can't perform any request against an indexer. The WebUI is disabled on the indexers and they don't have the LDAP configuration that the search heads do. How is it that the indexers know my LDAP user but not the locally created one? And how can I make the indexers aware of a user created locally on an SH or the CM?
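For context, the request being described is presumably of this shape (host name and credentials are placeholders):

```shell
# GET against an indexer's management port (8089), authenticating inline;
# reportedly works with an LDAP user but not with a locally created one
curl -k -u ldap_user:password https://idx1.example.com:8089/services/server/info
```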
Hello, I have HTTP 500 messages in my access log. I also have corresponding events from other log sources with the same correlation ID. Now I want to join the information to enhance the results.

Access log events:

2024-04-25T11:00:26+00:00 [info] type=access status=500 xCorrelationId=90e2a321-f522-466f-9ffa-72cbdaa1a576 ....
2024-04-25T10:15:25+00:00 [info] type=access status=500 xCorrelationId=9b1833f5-776b-44c3-92d7-d603abdfecf8 ...

Other events:

2024-04-25T10:15:24+00:00 xCorrelationId=9b1833f5-776b-44c3-92d7-d603abdfecf8 NoHandlerFoundException: No endpoint GET

My actual intention is to exclude results from the main search if there is another event with the same correlation ID containing specific exceptions like "NoHandlerFoundException". That means I effectively need a search per result of the main search. Do you know a solution for this? Thanks!
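A sketch of one possible approach: collect the correlation IDs that have a matching exception event in a subsearch and exclude them from the main search (the index names are assumptions; field names follow the sample events):

```spl
index=access type=access status=500
    NOT [ search index=app "NoHandlerFoundException" | fields xCorrelationId ]
| table _time xCorrelationId status
```

The subsearch returns its xCorrelationId values as an OR-ed filter, so NOT [...] drops every main-search result whose correlation ID also appears in an exception event. (Standard subsearch result limits apply.)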
I'm regularly seeing a warning triangle appear. What do I search for to find out what is causing it?
I have two events from a mainframe running z/OS (not sure whether that affects things):

1. {"MFSOURCETYPE":"SYSLOG","DATETIME":"2024-04-24 13:35:18.05 +0100","SYSLOGSYSTEMNAME":"A090","JOBID":"STC15694","JOBNAME":"RDSONLVP","SYSPLEX":"UKPPLX01","ACTION":"INFORMATIONAL","MSGNUM":"IEF234E","MSGTXT":"IEF234E K 449F,JE5207,PVT,RDSONLVP,RDSONLVP","MSGREQTYPE":""}
2. {"MFSOURCETYPE":"SYSLOG","DATETIME":"2024-04-24 13:34:47.92 +0100","SYSLOGSYSTEMNAME":"A090","JOBID":"STC15694","JOBNAME":"RDSONLVP","SYSPLEX":"UKPPLX01","CONSOLE":"INTERNAL","ACTION":"INFORMATIONAL","MSGNUM":"IEC147I","MSGTXT":"IEC147I 613-04,IFG0195B,RDSONLVP,RDSONLVP,IIII4004,449F,JE5207,\nRDS.VPLS.PDLY0001.PFDRL.U142530.E240220\x9C\n \x80\x80","MSGREQTYPE":""}

For event 1, everything works as it should. For event 2, the MSGTXT field comes up blank. I thought MSGTXT might be populated and just not displaying because of the control characters (the mainframe doesn't use these, so I'm not sure where they come from), but running rex against MSGTXT or substr still gives me nothing. Adding the search command rex "MSGTXT(?<msgtext>.+):" does create a msgtext field with the MSGTXT value plus a few extra characters: ":"IEC147I 613-04,IFG0195B,RDSONLVP,RDSONLVP,IIII4004,449F,JE5207,\nRDS.VPLS.PDLY0001.PFDRL.U142530.E240220\x9C\n \x80\x80","MSGREQTYPE", so the data is in the event to be extracted. I can work with this to extract the comma-delimited field that I actually want, but it's a pain having to process this particular MSGNUM (IEC147I) differently. Any suggestions on how to get these events parsed correctly? Thanks, Steve
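If the trailing control characters turn out to be the culprit, one hedged option is stripping them at index time with a SEDCMD in props.conf on the parsing tier (the sourcetype stanza name is a placeholder):

```ini
# props.conf on the indexer or heavy forwarder
[mainframe:syslog]
# remove high-bit bytes such as \x9C, \x80 before the JSON is parsed
SEDCMD-strip_ctrl = s/[\x80-\xff]//g
```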
Hi, I have set up a scheduled PDF report of a complex dashboard with several graphs to be emailed, but I keep getting a timeout error. The dashboard takes about 2 minutes to display. A simple dashboard with a single graph works perfectly, so I'm sure the rest of the config is okay. What I did notice, however, is that I receive the email with the following error about a minute after the report was scheduled: "Scheduled view delivery. An error occurred while generating the PDF. Please see python.log for details." Please help.
Editing to make it better: let's say I have login events with two important fields: past_deviceid and new_deviceid. I want to check whether the new_deviceid was assigned to a different user in the past; for that I need to compare the value of the field to the past_deviceid field of past events, and I'm kind of stuck here. In login events where the user uses their usual device, there will only be one field, past_deviceid; we get the new_deviceid field only when there is a login from a new device. In the end I want a table that shows each new_deviceid together with all the users that hold/held it, where there is more than one such user.

Example events with only one device:

User: Josh old_Device: iPhone12348
User: John old_Device: samsung165

Case where there is a new device:

User: Jane old_Device: iPhone17778 new_Device: samsung165

I want the following table (I guess the stats command fits here):

DeviceID     User
samsung165   Jane, John
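The stats idea sketched against the field names in the post (the coalesce of the new and past device IDs is an assumption about how the events are shaped):

```spl
index=logins
| eval DeviceID=coalesce(new_deviceid, past_deviceid)
| stats dc(User) AS user_count values(User) AS User BY DeviceID
| where user_count > 1
| table DeviceID User
```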
I have an architecture with a single SH and two indexers. I've installed the Splunk for Microsoft 365 add-on on the search head, so the collected logs are stored in the search head's index, but I want them to be stored on the indexers. Here are two possible solutions:
- Either I continue with the initial setup and select only one indexer among the two as the storage location for both the search head's data and the add-on's data.
- Or I set up a new heavy forwarder instance, install the add-on on it, and configure it to forward the data to the indexers.
Which solution is best in my case?
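For the heavy-forwarder option, the forwarding itself is ordinary outputs.conf on the HF (host names and ports are placeholders):

```ini
# outputs.conf on the heavy forwarder running the add-on
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```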
Hello all, I am using Splunk Cloud. I was looking on the forum yesterday for how to create an alert when an event is not detected. My idea is to send a mail when Event 4776 is not detected. The closest I have got is this:

index="*" | where ComputerName="ComputerName" | search EventCode=4776

This gives me every Event 4776 on the device ComputerName. I wanted to add earliest=-2m@m latest=-1m@m like I saw in different places, but then the result drops to 0, while I know this event is sent multiple times per second (multiple as in about 100 times).

Second question: when I save it as an alert, I specify Real Time and trigger when "search count=0". Is this right? I saw people saying "results=0", but then I get this error: "Cannot parse alert condition. Unknown search command 'results'." Thanks for the help.
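One common shape for a "missing event" alert is a scheduled (not real-time) search with the window baked into the search and a custom trigger condition (the index name here is an assumption):

```spl
index=wineventlog ComputerName="ComputerName" EventCode=4776 earliest=-2m@m latest=-1m@m
| stats count
```

With | stats count at the end, the custom trigger condition would be search count=0; without the stats line, the simpler "Number of Results is equal to 0" trigger works instead.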
I have a time picker and a time dropdown with static values.

<panel id="pqr">
  <input type="time" token="time">
    <label>DateTime</label>
    <default>
      <earliest>@d</earliest>
      <latest>now</latest>
    </default>
  </input>
</panel>
<panel id="abc">
  <input type="dropdown" token="timedrop">
    <label>Time Dropdown</label>
    <choice value="now">Now</choice>
    <choice value="+3d">3d</choice>
    <choice value="+4d">4d</choice>
    <choice value="+5d">5d</choice>
    <default>now</default>
    <change>
      <eval token="latest_Time">if('timedrop'="now",now(),relative_time(if($time.latest$="now",now(),$time.latest$), $timedrop$))</eval>
    </change>
  </input>
</panel>

The expectation is: if Now is selected in timedrop, data up to now should load; if +3d is selected, then +3d should be added to the time.latest token (coming from the time picker), and so on. On load (Today is selected by default), latest_Time returns NaN, but if I select a specific time range (say 8th April, 10AM-11AM) and timedrop as 3d, it works as expected.
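One thing worth noting: tokens in an <eval> are substituted textually, so on load the expression sees the bare word now (the picker default) rather than a quoted string, which evaluates to null and yields NaN; with a concrete time range the token is a numeric epoch, which happens to parse. A hedged, untested sketch of a quoted version:

```xml
<change>
  <eval token="latest_Time">if("$timedrop$"=="now", now(),
    relative_time(if("$time.latest$"=="now", now(), tonumber("$time.latest$")), "$timedrop$"))</eval>
</change>
```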
I am really struggling to add my macOS data into Splunk, just like we can upload the Windows event logs. Are there any add-ons I can install to help me do this? If so, can anyone explain how to configure them and make them work?
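For comparison, a universal forwarder on macOS can tail log files with an ordinary monitor stanza; a minimal sketch (paths and index are assumptions, and note that modern macOS unified logging is not file-based, so this covers only legacy/plain log files):

```ini
# inputs.conf on the macOS universal forwarder
[monitor:///var/log/system.log]
index = macos
sourcetype = syslog
disabled = 0
```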
Hi, below is the dashboard query, which works fine for EC2 Port Probe events, but the rest of the events are not displayed in the dashlet. When we use the "open in search" option, we find events in the Events tab but nothing under Statistics, even after changing the mode from Fast to Verbose. Please help.

index="aws_generic" source="aws.guardduty" detail.type=Discovery:S3/AnomalousBehavior*
| eval newtime=strftime(_time,"%m/%d/%y %H:%M:%S")
| rex field=host "(?<service>.*):(?<cloudprovider>.*):(?<region>.*):(?<cluster>.*):(?<role>.*):(?<stagingarea>.*)"
| stats sparkline(count) as history max(newtime) as "event time" by stagingarea detail.region detail.type detail.severity detail.description detail.accountId detail.id
| eval times=mvindex(times, 0, 2)
| sort - "event time" detail.severity
| table "event time","detail.accountId","detail.region","detail.severity","history","detail.type","detail.description"
| rename "event time" as "Event Time","detail.accountId" as "AWS Account ID","detail.region" as "AWS Region","detail.type" as "Finding Type","detail.severity" as "Severity","history" as "Event History","detail.description" as "Description"
Hi, I am calculating the difference between two search results as below. Sometimes the panel takes a bit of time to return results, so the variance shows a false count. Could you please suggest how to fix this? Thanks in advance.

SPL:
| makeresults
| eval variance=$MA:result.macoscount$ - $COSMOS:result.cosmacount$
| table variance

Issue: the middle panel (in blue) shows "MA to COSMOS value" - "COSMOS to P.H.B".
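A race-free alternative is computing both counts in a single search instead of subtracting tokens set by two independent panels; a sketch with placeholder base searches (the real MA and COSMOS searches are not shown in the post):

```spl
index=ma_index sourcetype=ma_logs
| stats count AS macoscount
| appendcols [ search index=cosmos_index sourcetype=cosmos_logs | stats count AS cosmacount ]
| eval variance = macoscount - cosmacount
| table variance
```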
My environment just moved to JSM for monitoring and resolving alerts, and we have since lost the ability to link back to the Splunk search an alert originated from when it is triggered and sent to the alert center. I wonder if there is a way to do this with this add-on?
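If the add-on's alert action honors Splunk's standard alert tokens, the link back to the triggering search can usually be embedded in the payload via $results_link$ (whether the JSM add-on exposes a field for this is something to verify):

```
Alert: $name$
Triggered search results: $results_link$
```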
I have an sc4s deployment running in an EC2 instance. I followed the documentation provided here: https://splunk.github.io/splunk-connect-for-syslog/main/. I have a C# application running inside Docker on the same host where sc4s is running. My application is able to send syslog data on port 514, and it is visible in the Splunk Cloud dashboard under the sourcetype sc4s:fallback. I am running the same application on my local Windows machine, sending data to the same port and the Linux machine's IP. The data reaches the host machine, because I can see it in the TCP dump, but sc4s is not ingesting it into Splunk Cloud. What should my next step in debugging be? I have tried everything from my side but still can't figure out what the issue with my sc4s deployment is.
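A couple of hedged next steps: confirm the packets actually reach the sc4s container, check whether the Windows sender uses UDP while the working in-container sender uses TCP (or vice versa), and inspect the container logs for drops or parse errors (container name and interface are placeholders):

```shell
# Is the data arriving on the host?
sudo tcpdump -ni any port 514

# What is sc4s doing with it?
docker logs -f SC4S 2>&1 | tail -n 100
```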
Hi, on a dashboard I have a simple checkbox element with token name LastOne_tkn. If the checkbox is enabled, LastOne_tkn=TRUE. There is a simple small table view which shows some results, and I would like the query behind that table to depend on the LastOne_tkn condition:

LastOne_tkn=TRUE (dedup activated): index=machinedata | dedup Attr1 | table Attr1, Attr2
LastOne_tkn=otherwise (dedup deactivated): index=machinedata | table Attr1, Attr2

Any ideas, please?
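One common pattern is to have the checkbox set a second token that expands to the dedup fragment itself; a sketch following the post's token and field names (the <change>/<condition> handling of an unchecked box is something to verify):

```xml
<input type="checkbox" token="LastOne_tkn">
  <choice value="TRUE">Last one only</choice>
  <change>
    <condition value="TRUE">
      <set token="dedup_tkn">| dedup Attr1</set>
    </condition>
    <condition>
      <set token="dedup_tkn"> </set>
    </condition>
  </change>
</input>
```

The table's search then becomes: index=machinedata $dedup_tkn$ | table Attr1, Attr2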
Hello, we are trying to configure the authentication extensions for the Okta identity provider, and below are the steps per the Splunk documentation:
1. Log into the Splunk platform as an administrator-level user.
2. From the system bar, click Settings > Authentication Methods.
3. Click "Configure Splunk to use SAML". The "SAML configuration" dialog box appears.
4. In the Script path field within the Authentication Extensions section of the dialog box, type in SAML_script_okta.py.
5. In the Script timeout field, type in 300s.
6. In the Get User Info time-to-live field, type in 3600s.
7. Click the Script functions field. In the pop-up window that appears, click getUserInfo.
8. Under Script Secure Arguments, click Add Input. In the Key field, type in apiKey. In the Value field, type in the API key for your IdP.
9. Click "Add input" again. In the Key field, type in baseUrl. In the Value field, type in the URL of your Okta instance.
10. Click Save. Splunk Cloud Platform saves the Okta configuration and returns you to the SAML Groups page.

Could anyone confirm whether these steps also work for Splunk on-prem, or are they applicable only to Splunk Cloud?

Also, per the step "In the Value field, type in the API key for your IdP", we have to provide the API key for the IdP, and our security team is asking what permissions the Okta API token needs. Any thoughts? Please advise.

Thank you!
We have a dashboard where we want to add a few hosts to a dropdown. Using a single host per dropdown choice works, but when we add multiple hosts it shows a syntax error (invalid attribute).
We have a dashboard, where we want to add few hosts in a drop down.  I tried using single host in a drop down its working, but when we add multiple hosts it showing syntax error(invalid attribute.) DEV amptams.dev.com ampvitss.dev.com ampdoctrc.dev.com SIT ampastdmsg.dev.com ampmorce.dev.com ampsmls.dev.com UAT ampserv.dev.com ampasoomsg.dev.com SYS ampmsdser.dev.com ampastcol.dev.com   Dashboard xml       <form version="1.1" theme="light"> <label>Dashboard</label> <fieldset submitButton="false"> <input type="time" token="timepicker"> <label>TimeRange</label> <default> <earliest>-15m@m</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="Server"> <label>Env wise hosts</label> <choice value="amptams.dev.com">ENVINORMENT-DEV</choice> <choice value="ampastdmsg.dev.com">ENVINORMENT-SIT</choice> <choice value="ampserv.dev.com">ENVINORMENT-UAT</choice> <choice value="ampmsdser.dev.com">ENVINORMENT-SYS</choice>> </fieldset> <row> <panel> <table> <title>Incoming Count &amp; Total Count</title> <search> <query>index=app-index source=application.logs $Server$ |rex field= _raw "application :\s(?<Application>\w+)" | rex field= _raw "(?<Msgs>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)" |chart count over Application by Msgs |rename "Initial message received with below details" as Income, "Letter published correctley to ATM subject" as ATM, "Letter published correctley to DMM subject" as DMM, "Letter rejected due to: DOUBLE_KEY" as Reject, "Letter rejected due to: UNVALID_LOG" as Rej_log, "Letter rejected due to: UNVALID_DATA_APP" as Rej_app |table Income Rej_app ATM DMM Reject Rej_log Rej_app </query> <earliest>timepicker.earliest</earliest> <latest>timepicker.latest</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">20</option> <option 
name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentageRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> </table> </panel> </row> <form>        
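For the multiple-hosts-per-choice part, a dropdown choice value can carry a whole search fragment rather than a bare host name; a hedged sketch (host lists taken from the post):

```xml
<input type="dropdown" token="Server">
  <label>Env wise hosts</label>
  <choice value="(host=amptams.dev.com OR host=ampvitss.dev.com OR host=ampdoctrc.dev.com)">ENVIRONMENT-DEV</choice>
  <choice value="(host=ampastdmsg.dev.com OR host=ampmorce.dev.com OR host=ampsmls.dev.com)">ENVIRONMENT-SIT</choice>
  <choice value="(host=ampserv.dev.com OR host=ampasoomsg.dev.com)">ENVIRONMENT-UAT</choice>
  <choice value="(host=ampmsdser.dev.com OR host=ampastcol.dev.com)">ENVIRONMENT-SYS</choice>
</input>
```

The panel search can then keep using $Server$ as-is.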
Hi, our application uses the log4j2 logging framework. We are trying to send log signals created by the OTel Logs SDK to the Splunk Cloud platform. Instead of the file receiver, we want to send them over HTTP, using the HTTP Event Collector to deliver the log records to Splunk Cloud. Our configuration for the HEC exporter in the OTel Collector is:

exporters:
  splunk_hec/logs:
    token: "<token>"
    endpoint: "https://<host>:8088/services/collector/raw"
    source: "otel"
    index: "logs"
    disable_compression: false
    tls:
      insecure_skip_verify: true
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [splunk_hec/logs]

We do see events being received on the Splunk Cloud platform, but we are not able to query the log data itself. Can someone confirm whether this is the correct way, or point me to the right resource? Thanks!
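One thing worth checking: /services/collector/raw treats the payload as raw text, while structured events normally go to /services/collector/event. A quick manual test against the event endpoint (host and token are placeholders, matching those in the config above):

```shell
curl -k "https://<host>:8088/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "hello from curl", "sourcetype": "otel", "index": "logs"}'
```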
I have changed my appserver/static/javascript directory, and the setup page that refers to it does not update. I tried uninstalling the add-on and restarting the Splunk server, but nothing changes. Please help me figure out what I am missing. This is my setup-page dashboard (./javascript/setup_page.js is the file I changed, without any effect):

<dashboard isDashboard="false" version="1.1"
           script="./javascript/setup_page.js"
           stylesheet="./styles/setup_page.css"
           hideEdit="true"
           hideAppBar="true">
    <label>Setup Page</label>
    <row>
        <panel>
            <html>
                <div id="main_container"></div>
            </html>
        </panel>
    </row>
</dashboard>
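Splunk Web caches static app assets aggressively, so a changed .js file often needs more than an app reinstall; two commonly used refresh steps (the _bump endpoint is Splunk's debug route for bumping the static-asset cache version):

```shell
# Clear the browser cache and restart Splunk (also restarts Splunk Web):
$SPLUNK_HOME/bin/splunk restart

# Or, as an admin, bump the static asset version from a browser:
#   https://<splunk-host>:8000/en-US/_bump
```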