All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thanks in advance. I have a JSON array, "content.List of Batches Processed{}", and Splunk already auto-extracts the field "content.List of Batches Processed{}.BatchID", but its count shows as 26 while the array actually contains 134 records. I want to extract all the values of P_REQUEST_ID, P_BATCH_ID, and P_TEMPLATE from the logs below as multivalue fields.

The query I tried to fetch the data:

| eval BatchID=spath("content.List of Batches Processed{}*", "content.List of Batches Processed{}.P_BATCH_ID"), Request=spath(_raw, "content.List of Batches Processed{}.P_REQUEST_ID")
| table BatchID Request

Sample event (truncated):

"content" : {
  "List of Batches Processed" : [
    { "P_REQUEST_ID" : "177", "P_BATCH_ID" : "1", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1r7", "P_BATCH_ID" : "2", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1577", "P_BATCH_ID" : "3", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "16577", "P_BATCH_ID" : "4", "P_TEMPLATE" : "Template", "P_PERIOD" : "24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" }
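One way to get every array element is to extract the array into a multivalue field and expand it, so each result carries exactly one batch object. A rough sketch; the intermediate field name "batch" is illustrative, not from the original post:

| spath path="content.List of Batches Processed{}" output=batch
| mvexpand batch
| spath input=batch path=P_BATCH_ID output=BatchID
| spath input=batch path=P_REQUEST_ID output=Request
| spath input=batch path=P_TEMPLATE output=Template
| table BatchID Request Template

mvexpand splits the multivalue batch field into one row per array element, so each row's spath sees a single JSON object and all 134 records come through rather than only the first extractions.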
Hello All, I am getting this message on the SH: "The searchhead is unable to update the peer information. Error = 'Unable to reach the cluster manager' for manager=https://x.x.x.x:8089". The SH is trying to connect to the CM that is standby. We have 2 CMs, one active and one standby; the configs on the CM and SH are below. Could you please advise if anything is wrong in the config? Thanks!

CM config:

[clustering]
mode = manager
manager_switchover_mode = auto
manager_uri = clustermanager:dc1,clustermanager:dc2
pass4SymmKey = key
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,site1:1,site2:1,total:3
site_search_factor = origin:1,site2:1,site1:1,total:3
cluster_label = publisher_cluster
access_logging_for_heartbeats = 0
cm_heartbeat_period = 3
precompress_cluster_bundle = 1
rebalance_threshold = 0.9

[clustermanager:dc1]
manager_uri = https://x.x.x.x:8089

[clustermanager:dc2]
manager_uri = https://x.x.x.x:8089

SH config:

[general]
serverName = x.com
pass4SymmKey = key
site = site2

[clustering]
mode = searchhead
manager_uri = clustermanager:dc1, clustermanager:dc2

[clustermanager:dc1]
multisite = true
manager_uri = https://x.x.x.x:8089
pass4SymmKey = key

[clustermanager:dc2]
multisite = true
manager_uri = https://x.x.x.x:8089
pass4SymmKey = key

[replication_port://9100]

[shclustering]
conf_deploy_fetch_url = https://x.x.x.x:8089
mgmt_uri = https://x.x.x.x:8089
disabled = 0
pass4SymmKey = key
replication_factor = 2
shcluster_label = publisher_shcluster
id = B57109F1-5D63-4FC9-9BFC-BE6B0375D9A7
manual_detention = off

Dhana
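Before changing stanzas, it can help to confirm from the search head itself which manager is reachable and active. A minimal check, assuming admin credentials (the credentials are placeholders; the endpoint below exists on recent Splunk releases, while older ones use /services/cluster/master/info):

curl -k -u admin:changeme https://x.x.x.x:8089/services/cluster/manager/info?output_mode=json

Run it against both manager IPs from the configs above. If only the standby answers, the SH's failure to reach the active node is more likely a network or certificate issue than a config one.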
I've set up Splunk Enterprise as a trial in a test domain, but I'm having issues importing logs from different remote sources. First, it says to connect to an LDAP before importing remote data. I tried this, but it won't connect to the domain; there are too many fields to fill in without examples, and I get "Could not find userBaseDN on the LDAP server". I then tried installing the Splunk forwarder on a Windows-based DC and set the Splunk server's forwarding and receiving to receive on port 9997. When I tried importing the host again, I kept getting errors about WMI classes from the host. Where is the documentation on setting up WMI for different remote sources? This piece should be easy. God help me when I try to add logs from networking devices. Real answers only please, no time wasters. Cheers,
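For what it's worth, the forwarder route avoids WMI and LDAP entirely: the DC collects its own event logs locally and pushes them to the indexer. A rough sketch of the two forwarder-side files, assuming the Splunk server listens on 9997 (the hostname is a placeholder):

inputs.conf on the forwarder:

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

outputs.conf on the forwarder:

[tcpout:default-autolb-group]
server = splunkserver.example.com:9997

WMI-based collection is only needed when polling remote Windows hosts without a forwarder installed, which is why the WMI errors disappear once the forwarder handles collection itself.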
I have a dashboard with a dropdown for services and 10 panels. When I select a service from the dropdown, all 10 panels display according to the chosen service. For example, if I choose "passed services" from the dropdown, instead of showing all panels I want to see only panel1 through panel5 and hide panel6 through panel10. How can I do that? See the sketch after the XML.

<form>
  <label>Services_Dashboard</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" token="time" searchWhenChanged="true">
      <label></label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="services" searchWhenChanged="true">
      <label>Services</label>
      <choice value="*">all</choice>
      <choice value="Lgn_srvc">Login services</choice>
      <choice value="Fld_srvc">Failed services</choice>
      <choice value="Pass_srvc">passed services</choice>
      <choice value="Tmout_srvc">timeout services</choice>
      <choice value="Lgout_srvc">logout service</choice>
      <choice value="Err_srvc">error services</choice>
      <choice value="War_srvc">warning services</choice>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>panel1 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel2 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel3 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel4 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel5 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel6 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel7 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel8 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel9 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
    <panel>
      <title>panel10 for $services$</title>
      <search>
        <query>index=xxx | stats count by app</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0]</option>
      <option name="refresh.display">progressbar</option>
    </panel>
  </row>
</form>
I want to get pfSense logs into Splunk for analysis. I tried the method at "https://www.jaycroos.com/splunk-to-monitor-pfsence-logs/" but it didn't work for me. Now I want to try the Splunk universal forwarder. How can I install the universal forwarder on my pfSense box to get the logs into Splunk? Any guidance would be appreciated. Please also let me know if there is another method to get my pfSense logs to the Splunk server.
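One thing to be aware of: pfSense runs on FreeBSD, which recent universal forwarder releases do not generally support, so the more common route is pfSense's built-in remote syslog (Status > System Logs > Settings) pointed at a syslog input on Splunk. A minimal sketch of the Splunk-side inputs.conf, assuming UDP 514 and a hypothetical sourcetype and index:

[udp://514]
sourcetype = pfsense
index = network
no_appending_timestamp = true

A dedicated syslog server (or Splunk Connect for Syslog) in front of Splunk is more robust than a direct UDP input, but the direct input is the quickest way to confirm logs are flowing.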
Hi Splunk Community, I need to create an alert that only triggers if two conditions are met. The conditions are layered:

1. Search results are >3 in a 5-minute interval.
2. Condition 1 is true 3 times over a 15-minute interval.

I thought I would create 3 sub-searches within the search, output each result in a "counter", and then run a search to check whether the counter values are >3:

index=foo mal_code="foo" source="foo.log"
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-5m@m latest=now
| stats count as event_count1
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-10m@m latest=-5m@m
| stats count as event_count2
| search "{\\\"status\\\":{\\\"serverStatusCode\\\":\\\"500\\\"" earliest=-15m@m latest=-10m@m
| stats count as event_count3
| search event_count*>0
| stats count as result

I am not sure my time modifiers are working correctly, and I am not getting the results I expected. I would appreciate some advice on how to go about this.
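One way to express both layers in a single pass is to bucket the last 15 minutes into 5-minute bins and count how many bins exceed the threshold. A rough sketch, assuming the quoted status string matches your raw events and the alert runs over earliest=-15m@m latest=@m:

index=foo mal_code="foo" source="foo.log" "{\"status\":{\"serverStatusCode\":\"500\""
| bin _time span=5m
| stats count by _time
| where count > 3
| stats count as intervals_over_threshold
| where intervals_over_threshold >= 3

The alert condition is then simply "number of results > 0". This also explains why the original pipeline returns nothing: the first | stats discards the raw events, so the later | search commands looking for the status string have nothing left to match.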
I am trying to create a props.conf to parse a custom timestamp. To do so, I wanted to upload data, use the Set Source Type page to configure the timestamp parameters, and then copy the resulting props.conf to the clipboard. However, the preview on this page does not update when I click "Apply Settings". The preview only changes when I select a new source type from the dropdown next to "Save As"; changing anything under "Event Breaks", "Timestamp", or "Advanced" does nothing. One thing to note: there is a red exclamation point in the top left saying "Can only preview uploaded files", and I'm unsure what this means. When I do save the data and search it, it DOES look like the source type changes I made took effect, but this really isn't a feasible way to test and configure my parameters. Is there any way to get this visible in the Add Data preview?
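As a fallback while the preview misbehaves, the timestamp settings the page generates can be written and tested by hand. A minimal sketch for a custom timestamp, with a hypothetical sourcetype name and format (adjust TIME_FORMAT and the lookahead to your data):

[my_custom_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

Placing this in a test app and re-uploading the sample file is a slower loop than a working preview, but it validates the exact settings you intend to deploy.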
Need assistance with this. I have installed the app, pointed it to the address of the on-prem server housing BlueCat, and ensured that the account we created can log in and has API access. I have given it the credentials, but nothing is being pulled from BlueCat into Splunk. A little assistance, please; I read the README files and they didn't help much.

Thanks, Justin
Hello, I'm attempting to change the sourcetype and host on a single event. The tricky part is that I want the second transform to act on the result of the first transform.

For example, my data comes in as:

index=main host=heavy_forwarder sourcetype=aws:logbucket

I want the data to change to:

index=main host=amazonfsx.host sourcetype=XmlWinEventLog

The catch is that I have other sourcetypes coming in as aws:logbucket and getting transformed to various other sourcetypes (cloudtrail, config, etc.). On those events I do not want to run the regex that changes the host value.

Suppose I have a props.conf that states:

TRANSFORMS-modify_data = aws_fsx_sourcetype, aws_fsx_host

and a transforms.conf of:

[aws_fsx_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = ^source::s3:\/\/fsxbucket\/.*
FORMAT = sourcetype::XmlWinEventLog
DEST_KEY = MetaData:Sourcetype

[aws_fsx_host]
REGEX = <Computer>([^.<]+).*?<\/Computer>
FORMAT = host::$1
DEST_KEY = MetaData:Host

I'm worried this will have unexpected results on the other sourcetypes that aws:logbucket carries, like cloudtrail and config. If I break it out into two separate transform classes, like this:

TRANSFORMS-modify_data = aws_fsx_sourcetype
TRANSFORMS-modify_data2 = aws_fsx_host

I'm worried the typing pipeline won't see the second transform. What is the most effective way to accomplish this?

Thanks, Nate
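One detail that may ease the worry: a transform only rewrites an event when its REGEX actually matches, so [aws_fsx_host] is effectively self-guarding; CloudTrail and Config JSON events contain no <Computer> element, so the host rewrite never fires on them. A minimal sketch of the props.conf side, with comments on when each transform fires (attaching it to the aws:logbucket stanza is an assumption; place it wherever that sourcetype's props live):

[aws:logbucket]
# Transforms named in one TRANSFORMS- list run left to right in the
# same typing-pipeline pass, so the sourcetype rewrite happens first.
# aws_fsx_sourcetype fires only on source::s3://fsxbucket/... events;
# aws_fsx_host fires only on events containing a <Computer> element.
TRANSFORMS-modify_data = aws_fsx_sourcetype, aws_fsx_host

Keeping both in one class preserves the ordering you want without risking the second class being missed.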
We are having difficulty getting exclusions to work for OTel logs whose fields are camelCase or whose values contain special characters. Fields without capitalization and/or special-character values can be parsed out, but the others cannot. Here is the config we are looking at (key portion of the attached YAML):

filelog/kube-apiserver-audit-log:
  include:
    - /var/log/kubernetes/kube-apiserver.log
  include_file_name: false
  include_file_path: true
  operators:
    - id: extract-audit-group
      type: regex_parser
      regex: '\s*\"resourceGroup\"\s*\:\s*\"(?P<extracted_group>[^\"]+)\"\s*'
    - id: filter-group
      type: filter
      expr: 'attributes.extracted_beta == "batch"'
    - id: remove-extracted-group
      type: remove
      field: attributes.extracted_group
    - id: extract-audit-api
      type: regex_parser
      regex: '\"level\"\:\"(?P<extracted_audit_beta>[^\"]+)\"'
    - id: filter-api
      type: filter
      expr: 'attributes.extracted_audit_beta == "Metadata"'
    - id: remove-extracted-api
      type: remove
      field: attributes.extracted_api
    - id: extract-audit-verb
      type: regex_parser
      regex: '\"verb\"\:\"(?P<extracted_verb>[^\"]+)\"'
    - id: filter-verb
      type: filter
      expr: 'attributes.extracted_verb == "watch" || attributes.extracted_verb == "list"'
    - id: remove-extracted-verb
      type: remove
      field: attributes.extracted_verb

The resourceGroup comparison is failing, while verb and level succeed. Here is an example log that would be pulled in:

{"apiVersion":"batch/v1","component":"sync-agent","eventType":"MODIFIED","kind":"CronJob","level":"info","msg":"sent event","name":"agentupdater-workload","namespace":"vmware-system-tmc","resourceGroup":"batch","resourceType":"cronjobs","resourceVersion":"v1","time":"2024-03-14T18:17:11Z"}
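One mismatch worth checking before suspecting case sensitivity: the group parser captures into attributes.extracted_group, but the filter compares attributes.extracted_beta; similarly, the remove step targets attributes.extracted_api while the parser captured extracted_audit_beta. A sketch of the group trio with the names aligned (the correction is an inference from the config above, so verify against your pipeline):

    - id: extract-audit-group
      type: regex_parser
      regex: '\"resourceGroup\"\s*\:\s*\"(?P<extracted_group>[^\"]+)\"'
    - id: filter-group
      type: filter
      expr: 'attributes.extracted_group == "batch"'
    - id: remove-extracted-group
      type: remove
      field: attributes.extracted_group

With the capture name and the filter expression pointing at the same attribute, the camelCase field should behave like verb and level, which already use consistent names.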
Hello - Trying to create a query that will output additions to Azure security group memberships. I am able to output the information I need, but the newValue field contains multiple different values. How do I omit the 'null' value and the security group IDs? I only want it to show the actual name of the security group. The way the logs are currently parsed, all of those values land in the same field, "properties.targetResources{}.modifiedProperties{}.newValue".

Query:

index="azure-activity"
| search operationName="Add member to group"
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", "properties.targetResources{}.modifiedProperties{}.newValue", operationName, _time
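A rough sketch using mvfilter to drop the unwanted entries before the stats. It assumes the null entries are the literal string "null" and the group IDs are GUID-shaped, which is worth verifying against your events:

index="azure-activity" operationName="Add member to group"
| eval group_name=mvfilter(NOT match('properties.targetResources{}.modifiedProperties{}.newValue', "null|[0-9a-fA-F]{8}-[0-9a-fA-F]{4}"))
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", group_name, operationName, _time

mvfilter evaluates the expression once per value of the multivalue field and keeps only the values where it returns true, so group_name ends up holding just the display name.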
I'm trying to build a query that gives real-time results for a value, but there is a delay between when the data is sent and when it is indexed. This means that when I run a real-time query over the last 60s, I get 20s of data and 40s of blank. I'd like to load the last 60s of received data in real time, not the data received in the last 60s. Any ideas? I've tried:

index=ind sourcetype=src (type=instrument) | where temperature!="" | timechart span=1s values(temperature)

and

index=ind sourcetype=src (type=instrument) | where temperature!=NULL | timechart span=1s values(temperature)

No luck with either.
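One non-real-time approach: search a wider window, find the newest event, and keep only the trailing 60 seconds relative to it. A sketch, assuming a 5-minute window comfortably covers your indexing lag:

index=ind sourcetype=src type=instrument temperature=* earliest=-5m
| eventstats max(_time) as newest
| where _time >= newest - 60
| timechart span=1s values(temperature)

eventstats stamps every event with the timestamp of the newest one, so the where clause slides the 60-second window to wherever the data actually ends rather than anchoring it at "now".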
Hi, I want to extract the value c611b43d-a574-4636-9116-ec45fe8090f8 from the field below. Could you please let me know how I can do this using rex field=httpURL?

httpURL: /peerpayment/v1/payment/c611b43d-a574-4636-9116-ec45fe8090f8/performAction
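A sketch that captures the UUID segment between /payment/ and the next slash (the output field name payment_id is illustrative):

| rex field=httpURL "/payment/(?<payment_id>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})"

If the IDs are not always UUID-shaped, a looser pattern such as "/payment/(?<payment_id>[^/]+)/" works as well.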
Hi, I have multiple searches that follow a naming convention like "Server1_Monitoring", "Server2_Monitoring", and so on. I used an asterisk in the stanza name to match all of them in savedsearches.conf, like:

[Server*_Monitoring]
dispatch.ttl = 3p

I restarted the search head after the change, but it didn't work. Is there any way to avoid listing all the searches explicitly in savedsearches.conf? Thank you!
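Stanza names in savedsearches.conf are literal, not globs, so the wildcard stanza is treated as a single search named "Server*_Monitoring". If the monitoring searches live in their own app, one alternative is the [default] stanza, which applies the setting to every saved search in that app; note this is broader than only the Server*_Monitoring ones, so treat it as a sketch:

[default]
dispatch.ttl = 3p

For true per-pattern control without hand-editing each stanza, scripting against the saved/searches REST endpoint is the usual fallback.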
Hi All, we are deploying the Splunk Universal Forwarder at the moment, which is going well, but I'm now looking at getting it installed on our Citrix infrastructure. In our environment we have "golden images" where we make changes. Once published (using PVS), the new image is deployed to the Citrix servers in that specific delivery group. When the (non-persistent) servers in that group perform their nightly reboots, they pick up the golden image via PVS. The clone-prep command works, and the new client comes through in Forwarder Management without any issues, which I'm happy with. However, because these servers reboot every night, I'm finding that duplicate entries for the same servers are created when the reboot completes and Splunk connects to the deployment server. I presume this is because the GUID changes every time these servers reboot, but I want to know whether there is a way to ensure the same GUID is used for a given hostname so that duplicate records are not created in the Forwarder Management console. Or is there an option somewhere for Splunk to identify a duplicate hostname and remove it automatically?

For example, this is how it works:

SERVERMAIN01 - Citrix maintenance server where golden images are attached and changes are made.
SERVERAPP01 - Application server which picks up the golden image (non-persistent) and reboots nightly.
SERVERAPP02 - Application server which picks up the golden image (non-persistent) and reboots nightly.
SERVERAPP03 - Application server which picks up the golden image (non-persistent) and reboots nightly.

So essentially, I'm getting duplicate clients in Forwarder Management for SERVERAPP01/02/03 every night, which will build up over time unless I manually intervene, which takes up my time. Hope this all makes sense and someone can point me in the right direction; I've searched around for a while and can't find any posts about this. Cheers,
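For context, the forwarder's GUID lives in $SPLUNK_HOME/etc/instance.cfg, so on a non-persistent machine it is regenerated from the golden image's cleared state on every reboot. A sketch of the two pieces commonly combined here (the persistence mechanism is environment-specific and an assumption on my part):

# On the golden image, before sealing it, clear identity so clones start clean:
$SPLUNK_HOME/bin/splunk clone-prep-clear-config

# On each non-persistent server, keep etc/instance.cfg on the persistent
# write cache/drive (e.g. via a symlink or a startup copy) so the same
# GUID survives the nightly reboot.

If each hostname keeps a stable GUID, the deployment server sees one client per server instead of a new record per reboot.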
Currently, we are using the ITSI module along with the Splunk_TA_snow add-on to create incidents in ServiceNow, and this is working as expected. We now have a new requirement to create TASKs along with the incidents. We went through the ServiceNow scripts and the documentation, and we couldn't find anything that would help us.

My questions are:
1. Does the add-on support this within its current scope?
2. If not, can it be customized?
Hi guys, in this case statement I am getting JobType values, but I am not getting a Status value, even though I already mention the keyword earlier in the query. Why am I not getting it?

index="mulesoft" applicationName="s-concur-api" environment=DEV timestamp ("onDemand Flow for concur Expense Report file with FileID Started" OR "Exchange Rates Scheduler process started" OR "Exchange Rates Process Completed. File successfully sent to Concur")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.payload.TargetFileName as TargetFileName
| eval JobType=case(like('message',"%onDemand Flow for concur Expense Report file with FileID Started%"),"OnDemand", like('message',"%Exchange Rates Scheduler process started%"),"Scheduled", true(),"Unknown")
| eval Status=case(like('message',"Exchange Rates Process Completed. File sucessfully sent to Concur"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR")
| table JobType Status
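Two details in the Status eval stand out against the rest of the query: the like() pattern has no % wildcards, so it only matches when the message is exactly that string, and it spells "sucessfully" while the base search spells "successfully". A sketch of the Status line with both aligned to the search filter, plus a catch-all as in JobType:

| eval Status=case(like('message',"%Exchange Rates Process Completed. File successfully sent to Concur%"),"SUCCESS", like('TracePoint',"%EXCEPTION%"),"ERROR", true(),"UNKNOWN")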
We had PS create a report, but I can't figure out what setting they used to show a time-based chart without a time-based command. They didn't use a dashboard; the graphic only shows on the report. I want the ability to produce a similar visualization, but I can't work out which setting causes the visual output.
How can I tell whether the SDK has been initialized or not in React Native?
<row>
  <panel depends="$tok_tab_1$">
    <table>
      <title>Alerts Fired</title>
      <search>
        <query>index=_audit action=alert_fired
| rename ss_name AS Alert
| stats latest(_time) AS "Event_Time" sparkline AS "Alerts Per Day" count AS "Times Fired" first(sid) AS sid by Alert
| eval Event_Time=strftime(Event_Time,"%m/%d/%y %I:%M:%S %P")
| rename Event_Time AS "Last Fired"
| sort -"Times Fired"</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
      <fields>Alert, "Last Fired", "Times Fired", "Alerts Per Day"</fields>
      <option name="count">10</option>
      <option name="dataOverlayMode">heatmap</option>
      <option name="drilldown">cell</option>
      <option name="rowNumbers">false</option>
      <option name="wrap">true</option>
      <drilldown>
        <set token="sid">$row.sid$</set>
        <unset token="tok_tab_1"></unset>
        <set token="tok_tab_2">active</set>
        <set token="tok_display_dd"></set>
        <set token="Alert">$row.Alert$</set>
        <link target="_blank">search?sid=$row.sid$</link>
      </drilldown>
    </table>
  </panel>
</row>
<row>
  <panel depends="$tok_tab_2$">
    <table>
      <title>$Alert$</title>
      <search>
        <query>| search?sid=$sid$</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
<row> <panel depends="$tok_tab_1$"> <table> <title>Alerts Fired</title> <search> <query> index=_audit action=alert_fired | rename ss_name AS Alert | stats latest(_time) AS "Event_Time" sparkline AS "Alerts Per Day" count AS "Times Fired" first(sid) AS sid by Alert | eval Event_Time=strftime(Event_Time,"%m/%d/%y %I:%M:%S %P") | rename Event_Time AS "Last Fired" | sort -"Times Fired" </query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> </search> <fields>Alert, "Last Fired", "Times Fired", "Alerts Per Day"</fields> <option name="count">10</option> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">cell</option> <option name="rowNumbers">false</option> <option name="wrap">true</option> <drilldown> <set token="sid">$row.sid$</set> <unset token="tok_tab_1"></unset> <set token="tok_tab_2">active</set> <set token="tok_display_dd"></set> <set token="Alert">$row.Alert$</set> <link target="_blank">search?sid=$row.sid$</link> </drilldown> </table> </panel> </row> <row> <panel depends="$tok_tab_2$"> <table> <title>$Alert$</title> <search> <query>| search?sid=$sid$</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row>     In the code above the below line works correctly opening a new search tab with the Alert search query. <link target="_blank">search?sid=$row.sid$</link> I would like to know how to have this same functionality, but within a token so I can keep it on the same page within another table.