All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I have a question about subsearches with federated search. The same search succeeds before federated search is configured and fails afterwards.

Before setting up federated search:
index=fw | join src_ip [ sourcetype=ips | stats count by src_ip ]
>> Result: OK

After setting up federated search:
index=fw | join src_ip [ sourcetype=ips | stats count by src_ip ]
>> Result: NG
Error: Search command can only accept one federated index.

Is there any solution?
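For what it's worth, the usual join-free rewrite removes the subsearch entirely, which may sidestep the restriction. A sketch using the names from the question; whether it behaves identically over a federated index depends on your deployment:

(index=fw) OR (sourcetype=ips)
| eventstats count(eval(sourcetype=="ips")) as ips_count by src_ip
| where sourcetype!="ips" AND ips_count > 0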
I am using the query below, which runs very slowly. How can I modify this query so that it searches and runs faster?
Hi, I have looked at the threat match "src" under Threat Intelligence Manager. In the configuration, the datamodel DNS Resolution is enabled and the match field is DNS.query. However, in the generated SPL I find these two lines:

| eval "threat_match_field"=if(isnull('threat_match_field'),"src",'threat_match_field')
| eval "threat_match_value"=if(isnull('threat_match_value'),'DNS.query','threat_match_value')

This sets threat_match_field to src, but I would have thought it should be "query". This produces a wrong description in the Threat Activity use case when the fields are populated. Is this a fault? Has anyone else noticed this?
[02-23 13:55:00] INFO LoggerMessageProcessor [[MuleRuntime].uber.31: [emea-order-mgmt-sys-uat].postOrderMgmtSysFlow.CPU_INTENSIVE @3473fb44]: { "externalTrackingId": "567", "globalTransactionId": "cd535f86-38d4-4f1c-9d1f-e18bc745df21", "muleTransactionId": "c2d3f7f9-1743-4bde-931d-ac59987bb42e", "applicationName": "emea-order-mgmt-sys-uat", "httpMethod": "POST", "processName": "postOrderMgmtSysFlow", "environment": "uat", "src": "dummy_src", "target": "TargetSystemName", "milestoneStatus": "SuccessResponseReturned", "targetResponseTime": 0, "muleProcessingTime": 13}

date_hour = 13
date_mday = 23
date_minute = 55
date_month = february
date_second = 0
date_wday = thursday
date_year = 2023
date_zone = local
host = http-inputs-olympus-eu.splunkcloud.com
index = mulesoft-emea-dev-demo
linecount = 14
punct = [-_::]____[[]..:_[----].._@]:_{__"":_"",__"":_"---
source = http:mulesoft
sourcetype = log4j
splunk_server = idx-i-01f4e4672afe12c83.olympus-eu.splunkcloud.com
timeendpos = 15
timestartpos = 1
How do I add a time input when I have to apply it to the x-axis, where the x-axis has data per week (the x-axis is weekly, in date format)? I have added the token, and the earliest and latest parts, to the XML as well, but it is still not working.
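For reference, a minimal Simple XML sketch of wiring a time input token into a panel search; the index and search here are placeholders, not taken from the dashboard in question:

<input type="time" token="time_tok">
  <label>Time Range</label>
  <default>
    <earliest>-30d@d</earliest>
    <latest>now</latest>
  </default>
</input>

<chart>
  <search>
    <query>index=my_index | timechart span=1w count</query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
</chart>

A common pitfall is leaving the panel's own <earliest>/<latest> hardcoded instead of referencing the token, in which case the input has no effect on the panel.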
My components column has too many values, so I tried to add a horizontal scroll using a CSS style tag with an ID selector, but I am unable to reuse the same style body for all the other panels or charts: I get a duplicate-ID error. How can I apply one style body to multiple panels or charts?
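One pattern that avoids the duplicate-ID problem is a single hidden HTML panel whose style block targets several panel IDs at once with a comma-separated selector, instead of repeating the style per panel. A rough sketch with hypothetical panel IDs; the exact inner element you need to scroll can vary by Splunk version:

<row depends="$always_hidden$">
  <panel>
    <html>
      <style>
        #panel_a, #panel_b, #panel_c {
          overflow-x: auto;
        }
      </style>
    </html>
  </panel>
</row>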
Hi, I want to create an alert with two conditions that must be met in sequence before the alert triggers. We are using an eventID for each condition. First condition: eventID 4625 count more than 10 by source IP. Second condition: eventID 4624 count more than 1 by source IP. I want the query to match the first condition, followed by the second condition, and populate a result. Please assist. Thank you.
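As a starting point, here is a single-pass sketch that checks both counts plus a rough ordering (all 4625s before the first 4624). The index, field, and event-code names (index=wineventlog, EventCode, src_ip) are assumptions; substitute whatever your data uses:

index=wineventlog (EventCode=4625 OR EventCode=4624)
| stats count(eval(EventCode=4625)) as failures, count(eval(EventCode=4624)) as successes, max(eval(if(EventCode=4625, _time, null()))) as last_failure, min(eval(if(EventCode=4624, _time, null()))) as first_success by src_ip
| where failures > 10 AND successes >= 1 AND first_success > last_failure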
I'm using a Splunk Cloud trial and want to test HEC. I used the commands below and received error messages.

1) curl -H "Authorization: Splunk [HEC-Token]" [URL: prd-mysplunkcloudurl-splunkcloou.com:8088/services/collector/event] -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'

curl: (60) SSL: certificate subject name 'SplunkServerDefaultCert' does not match target host name 'prd-mysplunkcloudurl-.splunkcloud.com'
More details here: https://curl.se/docs/sslcerts.html

2) I also tried another URL and received a could-not-resolve-host error.
Tried URL: https://http-inputs.prd-...-splunkcloou.com:8088/services/collector/event
curl: (6) Could not resolve host: http-inputs.prd-mysplunkcloudurl.splunkcloud.com

How can I use HEC?
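For what it's worth, the documented Splunk Cloud HEC URI format puts an http-inputs- prefix (hyphen, not dot) in front of the stack name, and -k skips certificate validation for a quick test. A sketch with a placeholder stack name; whether the port is 443 or 8088 depends on your Splunk Cloud environment:

curl -k "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <HEC-Token>" \
  -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'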
I have a dashboard which contains 5 panels in table format.

Query for panel 1:
index=xxxx sourcetype=xxxxx stroage_name=CompleteTransactions
| table Description application _time count
| streamstats current=f window=1 values(Description) as desp values(application) as app values(_time) as totaltime values(count) as totalcount
| eval siml=if(application == app AND Description == desp, count - totalcount, 0)
| where siml > 0
| stats sum(siml) as totalrequest by application

Output:
Description    application    _time                      count
ampt.gc.com    ampt-portal    2023-01-16 14:00:56.456    100
ampt.gc.com    ampt-login     2023-01-16 12:00:56.400    20
ampt.gc.com    ampt-clientid  2023-01-16 11:00:36.406    50

Similar to the panel 1 query, we have 4 other panels with different field names. The task: I need to get the output of the 5 panels into a summary index, with a retention period of 60 days, and the query needs to run every 24 hours. So I need to create a daily report over the last 24 hours that collects the data into a summary index, so that a search over the last 60 days will display the data. How can I do this?
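One common pattern for this: schedule each panel query as a daily report over the last 24 hours and append | collect to write the results into a summary index, then control retention on that index. A sketch; the summary index and source names are hypothetical:

index=xxxx sourcetype=xxxxx stroage_name=CompleteTransactions
| table Description application _time count
| streamstats current=f window=1 values(Description) as desp values(application) as app values(_time) as totaltime values(count) as totalcount
| eval siml=if(application == app AND Description == desp, count - totalcount, 0)
| where siml > 0
| stats sum(siml) as totalrequest by application
| collect index=summary_panels source="panel1_daily"

And in indexes.conf, 60 days of retention on the summary index (60 * 86400 seconds):

[summary_panels]
frozenTimePeriodInSecs = 5184000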
Hi Community. I need to install multiple apps and add-ons to our Splunk Cloud (Victoria Experience) environment from self-service. Can I install them all at once, one after the other, or do I need to wait for search head restarts or rolling restarts to complete before moving on to the next app/add-on? And what are some _internal logs that I can watch during the process to make sure everything is OK?
I've successfully configured SignalFx to create Jira tickets per https://docs.splunk.com/Observability/admin/notif-services/jira.html. The documentation says a comment will be added to the Jira ticket when the alert is resolved. This is not happening for many of my alerts. I suspect the problem occurs for alerts that are short-lived. Has anyone else encountered this? Do you know of any fix or solution?
I installed Splunk standalone with https://splunk.github.io/splunk-ansible/, version 9.0.4, on Ubuntu jammy 22.04.2. The instance is up and seems to be running fine, but after configuring data ingestion I get nothing available in search. I tested 4 different inputs and none of them worked. I suspect something related to indexing, but I could not identify where this could be tracked down (log file, Splunk table...). Any advice? Thanks.

* Data receiving on port 9997
* Data input: TCP for syslog data
* Local log file: /var/log/dpkg.log
* Local systemd-journald

I can find ingestion activity in /opt/splunk/var/log/splunk/metrics.log but not in `index=_* component=metrics | stats count BY index,component,group,host` (no group=tcpin_connections). There was a systemd-journald permission issue until I fixed it by adding the splunk user to the systemd-journal group. Network ports are listening according to `ss -tunap` and were tested successfully with `nc`.

`curl -vk -u user:pass https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus > TailingProcessor-FileStatus.xml` confirms ingestion for the local log file only.
`index=_internal source=*metrics.log tcpin_connections` = no results.
`| tstats count WHERE index=* OR index=_* BY host` = has data, but only the Splunk internal indexes (aka _*) and localhost/the Splunk server itself.
`index=_internal component!="Metrics" | stats count BY index,component,group,host` = only _internal/LMStack*/Trial/splunkhost.

Settings > Licensing: "Trial license group": current = no licensing alerts/violations, 0% of quota.
Settings > Monitoring console: returns nothing on indexing (0 KB/s); historic data only shows internal sources spiking at install time. Splunkd system status is green. I tried to restart a few times but it did not help.

I checked the following resources but did not find the issue:
https://docs.splunk.com/Documentation/Splunk/9.0.4/Forwarding/Receiverconnection
https://docs.splunk.com/Documentation/Splunk/9.0.4/Troubleshooting/Cantfinddata
https://community.splunk.com/t5/Getting-Data-In/What-are-the-basic-troubleshooting-steps-in-case-of-universal/td-p/456364

The only warning I have from the web console is on resource usage (IOWait), as this is a lab system without production specs. To me, that should only slow things down, not block them.

Extract from config:

# more /opt/splunk/etc/apps/search/local/inputs.conf
[monitor:///var/log/dpkg.log]
disabled = false
host = mlsplunk002
index = dpkg
sourcetype = dpkg

[tcp://9514]
connection_host = dns
host = mlsplunk002
index = syslog
sourcetype = syslog

[journald://journald]
interval = 30
journalctl-exclude-fields = __MONOTONIC_TIMESTAMP,__SOURCE_REALTIME_TIMESTAMP
journalctl-include-fields = PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE
journalctl-quiet = true

[tcp://6514]
connection_host = dns
host = mlsplunk002
index = syslog
sourcetype = syslog

# more /opt/splunk/etc/apps/search/local/props.conf
[dpkg]
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = /var/log/dpkg.log
disabled = false
pulldown_type = true

# more /opt/splunk/etc/apps/search/local/indexes.conf
[dpkg]
coldPath = $SPLUNK_DB/dpkg/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/dpkg/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/dpkg/thaweddb

[syslog]
coldPath = $SPLUNK_DB/syslog/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/syslog/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/syslog/thaweddb
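Two standard checks that often narrow this kind of problem down: whether any ingestion queues are blocked, and whether splunkd is logging errors. Both searches below are stock troubleshooting queries, nothing specific to this setup:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by name

index=_internal source=*splunkd.log* log_level=ERROR
| stats count by component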
Recently, I ingested data from a Windows event log going back 3 years using the XmlWinEventLog sourcetype. Later, I switched the sourcetype to wineventlog, which gave me an easier way to extract fields from the events. I deleted the data and the index and re-created it, hoping to re-ingest all the events using wineventlog going back 3 years. However, now I'm only able to ingest the new events flowing from that event log. And yes, I am using "All time" as the time range. Is there a way to force Splunk to scrape everything in an event log?
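For Windows event log inputs, how far back Splunk reads is governed by start_from and current_only in inputs.conf, and the input also keeps its own checkpoint, so re-reading history usually means setting these and clearing that checkpoint. A sketch; the channel name is illustrative:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0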
I'm working on building a non-production dashboard for our app with Dashboard Studio. I want to have some values in my queries set via a token so I can easily modify the value when I copy the dashboard to production. I have had some success with a text input to save the value for me, and am now trying to set the token value without an input. I found a page which says I can set default values for tokens, but it doesn't seem to work. Is there any way to do it?

Working version with input:

{
  "visualizations": { "viz_rMCf491V": { "type": "viz.column", "title": "Response Status Code", "description": "", "dataSources": { "primary": "ds_kfldHPlv" } } },
  "dataSources": { "ds_kfldHPlv": { "type": "ds.search", "options": { "query": "index=$index_token$ | timechart count by STATUS" }, "name": "ds_response_status_code" } },
  "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } },
  "inputs": {
    "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-60m@m,now" }, "title": "Global Time Range" },
    "input_lpS6S4xZ": { "options": { "defaultValue": "some_index_value", "token": "index_token" }, "title": "Gateway Index", "type": "input.text" }
  },
  "layout": {
    "type": "grid",
    "options": {},
    "structure": [ { "item": "viz_rMCf491V", "type": "block", "position": { "x": 0, "y": 0, "w": 1200, "h": 400 } } ],
    "globalInputs": [ "input_global_trp", "input_lpS6S4xZ" ]
  },
  "description": "",
  "title": "Sample Dashboard"
}

Not working version (token default set under "defaults", text input removed):

{
  "visualizations": { "viz_rMCf491V": { "type": "viz.column", "title": "Response Status Code", "description": "", "dataSources": { "primary": "ds_kfldHPlv" } } },
  "dataSources": { "ds_kfldHPlv": { "type": "ds.search", "options": { "query": "index=$index_token$ | timechart count by STATUS" }, "name": "ds_response_status_code" } },
  "defaults": {
    "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } },
    "tokens": { "default": { "index_token": { "value": "some_index_value" } } }
  },
  "inputs": {
    "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-60m@m,now" }, "title": "Global Time Range" }
  },
  "layout": {
    "type": "grid",
    "options": {},
    "structure": [ { "item": "viz_rMCf491V", "type": "block", "position": { "x": 0, "y": 0, "w": 1200, "h": 400 } } ]
  },
  "description": "",
  "title": "Sample Dashboard"
}

Not working version with defaults:

{
  "visualizations": { "viz_rMCf491V": { "type": "viz.column", "title": "Response Status Code", "description": "", "dataSources": { "primary": "ds_kfldHPlv" } } },
  "dataSources": { "ds_kfldHPlv": { "type": "ds.search", "options": { "query": "index=$index_token$ | timechart count by STATUS" }, "name": "ds_response_status_code" } },
  "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } },
  "inputs": {
    "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-60m@m,now" }, "title": "Global Time Range" },
    "input_lpS6S4xZ": { "options": { "defaultValue": "some_index_value", "token": "index_token" }, "title": "Gateway Index", "type": "input.text" }
  },
  "layout": {
    "type": "grid",
    "options": {},
    "structure": [ { "item": "viz_rMCf491V", "type": "block", "position": { "x": 0, "y": 0, "w": 1200, "h": 400 } } ],
    "globalInputs": [ "input_global_trp", "input_lpS6S4xZ" ]
  },
  "description": "",
  "title": "Sample Dashboard"
}
Is there any way to run a series of queries - anywhere from 10 to 60 - and have a report generated? I'm being discouraged from using the API (and I'm not an administrator) to run queries, but I'm struggling to figure out another way to automate running the queries I'm creating based on alerts from another application.
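Without touching the REST API, the usual Splunk-native route is scheduled reports, one per query, optionally emailed. A minimal savedsearches.conf sketch; the stanza name, search, and recipient are hypothetical:

[Alert Review Report 01]
search = index=app_logs sourcetype=alerts | stats count by alert_name
dispatch.earliest_time = -24h
dispatch.latest_time = now
enableSched = 1
cron_schedule = 0 6 * * *
action.email = 1
action.email.to = team@example.com
action.email.sendresults = 1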
Hi folks, I'm using Splunk Cloud and I'm getting only 200 fields extracted, or fewer. After checking limits.conf it seems like the limit can be increased, but I'm not sure if I can modify it myself. So these are my questions:

* Can I increase it using ACS?
* Is this a global change, or can it be applied to a specific index?
* What is the maximum for this setting? limits.conf does not specify it.
* Lastly, can I run into performance issues by setting it to 0 or increasing it a lot?

Thanks in advance!
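For context, the setting that usually governs automatic key-value extraction is limit under the [kv] stanza of limits.conf (default 100; 0 means unlimited). A sketch of the change, assuming that is the setting in play here; on Splunk Cloud this would typically have to go through ACS or support rather than a direct file edit:

[kv]
limit = 500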
Hello. I'm trying to create a custom alert that does the following: monitor in real time whether "Connection was lost" appears within a certain source, and then, if "Connection has been obtained" is not in the log after 2 minutes, send an email alert. I set up my alert with a search string for "Connection was lost" and then put index="main" source="app-api" "Connection has been obtained" in the trigger condition. I am receiving an error stating "Cannot parse alert condition. Unknown search command 'index'.." in the Save As Alert window. Any guidance on how to meet the criteria? Thanks in advance.
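The custom trigger condition operates on the results of the base search, so it can't start a new search with index=, which is what that error means. One alternative is to put all of the logic into a single scheduled search and trigger on any result; a sketch, reusing the index and source from the question and assuming both messages appear in _raw:

index=main source="app-api" ("Connection was lost" OR "Connection has been obtained")
| eval status=if(like(_raw, "%Connection was lost%"), "lost", "restored")
| stats max(eval(if(status="lost", _time, null()))) as last_lost, max(eval(if(status="restored", _time, null()))) as last_restored
| where isnotnull(last_lost) AND (isnull(last_restored) OR last_restored < last_lost) AND last_lost < now() - 120

Scheduled every minute or two with trigger condition "number of results > 0", this fires when a loss has gone unrecovered for at least 2 minutes.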
We were getting data in; we haven't changed much except our network configs. Now we are getting the error shown in the attached picture. I have tried updating the key in outputs.conf on the forwarders and on the master indexer as well, and did restarts.
I have several apps I have built in Splunk Enterprise 8.2.5. Each one is in a separate folder under /etc/apps on my search head, and each has numerous lookups, macros, etc. configured. The problem I have is that I wish to combine them all into a single app, as they are all used by the same people. However, I source-control them in Git so I can easily make a change, update Git, and re-deploy the app to the search head; this amounts to clearing out the app/<app> folder and pulling down the latest version from Git. This works great. Now, if I move everything to a single app, how can I keep a folder for each 'sub app' so I can keep my Git model? Essentially I want this new app to just have a single set of navigations/dashboards.
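Since a deployed Splunk app can't contain nested apps, one workable pattern is to keep a folder per sub-app in the Git repo and merge them into a single app directory at deploy time. A rough shell sketch; the repo layout and app name are hypothetical, and note that identically named .conf files from different sub-apps would overwrite each other, so keep those unique per sub-app:

# repo layout (illustrative): sub_app_a/{default,lookups}, sub_app_b/{default,lookups}
rm -rf build/combined_app
mkdir -p build/combined_app
for d in sub_app_a sub_app_b; do
  cp -R "$d/." build/combined_app/
done
# then deploy build/combined_app to $SPLUNK_HOME/etc/apps/ on the search head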
I noticed that when I downloaded the newest version of Splunk Security Essentials 3.7.0, it is an SPL file and not a TGZ like all of the other Splunk apps. Our upload process is programmed to accept TGZ, not SPL, when uploading this zipped file. Do I need to repackage this as a TGZ? What's up with that? Thanks!
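A .spl package is just a gzipped tar archive with a different extension, so renaming it is usually enough. A quick sketch; the filename is illustrative:

mv splunk-security-essentials_370.spl splunk-security-essentials_370.tgz
tar -tzf splunk-security-essentials_370.tgz | head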