All Topics

The snippet below from my dashboard works fine when we run the query as straight SPL. However, when it is run via the dashboard panel, I can see the proper query in the inspector, but for some reason (I don't know why) it doesn't show any results. Note: when I go down to the inspector and select "Open in Search", I see the expected results. Any ideas? I've tried opening up the permissions, yet still nothing. I would provide more details, but I can't think of anything else that matters. Thank you in advance.

<row>
  <panel>
    <title>IP Lookup</title>
    <input type="text" token="the_ip" searchWhenChanged="true">
      <label>IP</label>
    </input>
    <event>
      <title>IP Lookup</title>
      <search>
        <query>| inputlookup ip_list.csv where ip=$the_ip$</query>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </search>
      <option name="list.drilldown">none</option>
    </event>
  </panel>
</row>
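One detail worth checking (an assumption, not a confirmed fix for this dashboard): in Simple XML the token is substituted literally into the query, so an unquoted IP value can change how the where clause parses. Quoting the token keeps the comparison a string match:

```
| inputlookup ip_list.csv where ip="$the_ip$"
```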
Hi everyone, can someone please help with a search I'm trying to create? My end goal is to capture which user account logged into the server and have a time associated with their login. My search so far is below; it only gives me the count of how many times the users logged in for the past "x" days.

index="wineventlog" host="Redacted" source="XmlWinEventLog:Security"
| stats count by SubjectUserSid
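A sketch of one way to attach times to the logins (filtering on EventCode 4624, the Windows successful-logon event, is my assumption about which events matter here):

```
index="wineventlog" host="Redacted" source="XmlWinEventLog:Security" EventCode=4624
| stats latest(_time) as last_login count by SubjectUserSid
| convert ctime(last_login)
```

Swapping latest(_time) for values(_time) would list every login time per account instead of only the most recent one.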
What are the best configuration settings for using pgBadger to analyze Splunk Phantom's PostgreSQL logs?
Hello everyone, I am trying to join using "Table" as the common field. Here is my query:

index=prod source=A
| stats count by PROD TABLENAME_PROD Partition_Column_PROD INI_TRANS_PROD Table Column Trans
| sort TABLENAME_PROD
| join type=left Table
    [ search index=preprod source=B
    | stats count by CAP TABLENAME_CAP Partition_Column_CAP INI_TRANS_CAP Table Column Trans
    | sort TABLENAME_CAP ]
| table Partition_Column_PROD Partition_Column_CAP

The values I am getting here do not match those I get if I run both searches separately and join their output manually (keeping Table as the common field). That is, the values of Partition_Column_PROD and Partition_Column_CAP from this query should match the values of Partition_Column_PROD and Partition_Column_CAP that I get when I run the two searches separately.

Actual output of the above query:

Partition_Column_PROD | Partition_Column_CAP
(ACS_ID, ACS_ID) | (ACS_ID)
(ADDR_ID, ADDR_ID) | (ADDR_ID)
(CITY, CITY, ADDR_ID, ADDR_ID) | (ADDR_ID)
(ALFRESCO_MSTR_REC_ID) | (ALFRESCO_MSTR_REC_ID)
(APPL_ID, APPL_ID) | (APPL_ID)
(ACS_METHD_ID, ACS_METHD_ID) | (ACS_METHD_ID)
(APPL_CMPNT_ID, APPL_CMPNT_ID) | (APPL_CMPNT_ID)
(CMPNT_TYP_ID, CMPNT_TYP_ID) | (APPL_CMPNT_ID)

Expected output:

Partition_Column_PROD | Partition_Column_CAP
(ACS_ID, ACS_ID) | (ACS_ID)
(ADDR_ID, ADDR_ID) | (ADDR_ID)
(CITY, CITY, ADDR_ID, ADDR_ID) | (CITY, ADDR_ID)
(ALFRESCO_MSTR_REC_ID) | (ALFRESCO_MSTR_REC_ID)
(APPL_ID, APPL_ID) | (APPL_ID)
(ACS_METHD_ID, ACS_METHD_ID) | (ACS_METHD_ID)
(APPL_CMPNT_ID, APPL_CMPNT_ID) | (APPL_CMPNT_ID)
(CMPNT_TYP_ID, CMPNT_TYP_ID) | (CMPNT_TYP_ID)

In the results above, Partition_Column_PROD and Partition_Column_CAP come from both searches (the search and the subsearch), joined manually.
There are no repeated values in the second part of the search. For example, the field Partition_Column_CAP has these three distinct values: (ACS_ID), (ADDR_ID), and (CITY, ADDR_ID), and each event has a unique value. But when I add this second search to the join command, I start seeing repeated values for that same field, which should not be the case.
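For reference, a toy illustration of the join semantics with made-up data (not the poster's fields): join type=left pairs each left-hand row with the first subsearch row that has the same value of the join field, so only the listed field participates in the match.

```
| makeresults | eval Table="T1", prod_val="A"
| join type=left Table
    [| makeresults | eval Table="T1", cap_val="X"]
| table Table prod_val cap_val
```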
When searching IIS logs, there is some irregularity in the host field. Originally this was reported as IIS logs being missing while the OS logs were showing up as expected. When I first went to investigate this, I picked a particular server, ran a tstats, and saw both the OS and IIS logs:

| tstats count where host=pdwww1 by index sourcetype

Since I saw the IIS logs in tstats, I went to pull them up (index=iis host=pdwww1) and received no results. I then did a generic index search and the logs were there, but I noticed there was no host field. That's really odd, since the operating-system logs are showing up as expected, and the host value for both those logs and the IIS logs comes from the configuration merging of the host value in $SPLUNK_HOME/etc/system/local/inputs.conf. I then tried a search using the index-time style:

index=iis host::pdwww1

which returned the logs as expected. So we can find the logs, but things like index=iis | stats count by host don't work correctly because of the missing host value at search time.
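One way to quantify the mismatch (a sketch; the host::pdwww1 syntax matches the index-time value even when the search-time host field is absent):

```
| tstats count where index=iis host::pdwww1
```

versus

```
index=iis host=pdwww1 | stats count
```

A large gap between the two counts would confirm how many events carry the index-time host value but lose it at search time.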
Seeing a lot of warnings in splunkd.log: "InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised." Is there a way to get rid of these warnings? I see they are generated from the connectionpool.py script. The error is coming from SA-ITOA/lib/SA_ITOA_app_common/solnlib/packages/requests/urllib3/connectionpool.py. ITSI version 4.2.1.
Looking for some assistance extracting all of the nested JSON values, like "results", "tags", and "iocs" in the screenshot. I've been trying to get spath and mvexpand to work for days, but apparently I'm not doing something right. Any help is appreciated.
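A generic pattern for this kind of extraction (the exact paths are assumptions based on the field names mentioned, since the actual JSON isn't shown): pull the array out as a multivalue field, expand it to one event per element, then spath each element again.

```
... | spath path=results{} output=results
| mvexpand results
| spath input=results path=tags{} output=tags
| spath input=results path=iocs{} output=iocs
```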
I had the following alerts after I restarted Splunk from the web interface. These alerts occurred on May 5th, and I haven't seen them come back:

Failed to start KV Store process. See mongod.log and splunkd.log for details.
KV Store changed status to failed.
KVStore process terminated.
KV Store process terminated abnormally (exit code 1, status exited with code 1). See mongod.log and splunkd.log for details.

I checked /opt/splunk/var/lib/splunk/kvstore/mongo and the permissions are set to splunk:splunk. I see a splunk.key dated March 1, 2017; should that be rotated or something? Should I restart Splunk and see if those errors come back?
Hello, I have 4 sources (source1-4). The common field for sources 1 to 3 is Properties.Id; source4's common field is Id (it does not have Properties). I need to join these 4 sources on the field Properties.Id. Since source4 just has Id, I renamed this field to Properties.Id using eval. With that eval change, the query below is not working; it is not doing the join correctly on the field Properties.Id. Any ideas what the issue is here? Splunk query:

sourcetype=source1 OR sourcetype=source2 OR sourcetype=source3 OR sourcetype=source4
| eval Properties.Id=if(sourcetype="source4",Id,null())
| stats values(Properties.Id) as Id by sourcetype
| append
    [| makeresults
    | eval sourcetype=split("source1 ,source2 ,source3 ,source4" ,",")
    | mvexpand sourcetype
    | fields sourcetype]
| fillnull value="Not exists" Id
| chart count over Id by sourcetype
| sort Id
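One detail that may matter here (a sketch, not a confirmed fix): in eval, a field name containing a dot must be single-quoted on the left-hand side, otherwise Properties.Id is not assigned as one field. Using the existing value as the fallback, instead of null(), also avoids wiping the field for sources 1-3:

```
... | eval 'Properties.Id'=if(sourcetype="source4", Id, 'Properties.Id')
```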
Trying to create a scheduled report to fire off a search and populate a summary index. I just want counts for each sourcetype in a new index called "index_summary". Planning to use this to track volume over time and changes in volume, and possibly even alert if it changes by, say, ±300%. I'm trying:

| tstats count where index=* by date_hour index sourcetype

but not really seeing the results in the shape they will need to be. Looking to populate date/time (most likely hourly), index, sourcetype, and count. Any help would be appreciated.
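A sketch of a bucketed variant (the index name comes from the post; splitting by _time with a span gives a real hourly timestamp instead of the search-time date_hour field, and collect writes the rows into the summary index):

```
| tstats count where index=* by _time span=1h index sourcetype
| collect index=index_summary
```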
Hi all, I have created a modular input using Splunk's Add-on Builder (v3.0.1). The modular input is based on a Python script which polls our REST API to ingest data into Splunk. The modular input takes three data input parameters: a simple text box, a password, and a checkbox. What I want to do is, based on the data ingested by the script, disable or remove the checkbox input parameter so that the user cannot edit it. Do Splunk modular inputs provide any such functionality? Any help pointing me in the right direction would be highly appreciated. Regards, Thanks
Dear All, I have created a new modular input using Splunk's Add-on Builder (v3.0.1). The modular input is based on a Python script which polls our REST API to ingest data into Splunk. The modular input takes three data input parameters: a simple text box, a password, and a checkbox. I have observed that the checkbox parameter is not rendering properly when I configure the modular input from Splunk Settings -> Data Inputs. It shows the checkbox as a simple text field with a default value of False (see image below). But when I configure the modular input from within the add-on, it properly shows the checkbox field, as below. This is kind of weird behavior. Is that the way Splunk is supposed to show the configuration in both cases, or is it a bug? Please point me toward anything which can solve this. Let me know if you need any more information to debug this. Regards, Umair
Hello, I have a simple question: does using indexer clustering affect per-day license usage? For example, if I have a 100 GB/day license and use 3 indexers with a replication factor of 3, will my license be consumed 3 times over or not?
Hello, I would like to extract data from inside a parenthesis to create a new field. This command works well in a search:

rex field=user_description "((?[^)]*)"

But when I try to configure this inside a query of a dashboard, it does not work, I guess because of some incompatibility with the XML. The alternative is to extract the field in the sourcetype, but I am not able to get the regular expression right. Could anyone provide the regex?

Example of the data:

{"userid": 1, "action": "development (project)", "user_description": " Michael Jordan (adm-Jordan)"}

And I would like to obtain: adm-Jordan

Please take into account that other fields can contain information between parentheses, but in my case I want the data inside the first parenthesis that appears after user_description. Many thanks.
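A sketch of one way to write it (the capture-group name "target" and the escaped parentheses are my additions; the regex in the post appears to have lost its group name in rendering):

```
... | rex field=user_description "\((?<target>[^)]*)\)"
```

In Simple XML, the < in the (?<target>...) group is what clashes with the markup, so wrapping the query in a CDATA section, or escaping < as &lt;, is the usual way to make the same rex work inside a dashboard.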
Hello, still checking Answers. Is it possible to use all of the parameters from an alert in a dashboard panel? Positive results from an alert set tokens that panels use to show the panel (depends). Our dashboard contains the search SPL from many alerts. When changes occur, we have to edit in two places (the alerts and the XML code). We want to edit only the alert, as well as use other alert parameters, i.e. cron schedule, throttling, etc.

EDIT: Here is one of the panel searches.

<search>
  <query>index IN (catalina,solarisevents) AND source=/logs/access AND sourcetype=access_combined AND host=host3* AND method IN (GET,POST) (date_hour > 6 AND date_hour < 19)
| eval certsFiled=case(file="confirm.jsp","1")
| timechart count(method) AS Hits, count(certsFiled) AS Certs span=2min
| eval ratio=Certs/Hits
| where ratio < .01 <!--| where 1=1--></query>
  <earliest>-5min@min</earliest>
  <latest>now</latest>
  <sampleRatio>1</sampleRatio>
  <refresh>30sec</refresh>
  <refreshType>delay</refreshType>
  <progress>
    <condition match="'job.resultCount' > 0">
      <set token="panel_show3">true</set>
    </condition>
    <condition>
      <unset token="panel_show3"></unset>
    </condition>
  </progress>
</search>

Here is the alert, Host3Count:

index IN (catalina,solarisevents) AND source=/logs/access AND sourcetype=access_combined AND host=host3* AND JAR3
| timechart count span=1m
| delta count as dcount
| eval prevCount = count-dcount
| fields - dcount
| search prevCount > 100 and count < 20

The alert is scheduled; runs on cron; has trigger conditions and actions. I want to reference the alert and all of the associated parameters (cron, etc.) from a dashboard panel. If the alert generates results, then that panel will be displayed (using tokens and <panel depends="$panel_show3$">). Stay safe and healthy, you and yours. Thanks and God bless, Genesius
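For reference, a minimal sketch of the Simple XML pattern for pointing a panel at a saved search/alert instead of inlining its SPL (the name "Host3Count" is the alert name from the post; whether this meets every requirement here, such as surfacing cron and throttling settings, is not something I can confirm):

```xml
<search ref="Host3Count">
  <progress>
    <condition match="'job.resultCount' &gt; 0">
      <set token="panel_show3">true</set>
    </condition>
    <condition>
      <unset token="panel_show3"></unset>
    </condition>
  </progress>
</search>
```

With ref, edits to the saved alert's SPL are picked up by the dashboard automatically, which removes the duplicate-edit problem described above.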
app/SplunkEnterpriseSecuritySuite/ess_notable_suppression_list — I need to pull a report from the Notable Event Suppressions, and I am not sure how. The fields I want are: Label, Description, Start Time, Expiration Time, Status. Thanks in advance.
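A possible starting point (this rests on the assumption that Enterprise Security stores notable event suppressions as saved eventtypes whose names begin with notable_suppression, which is worth verifying on your version):

```
| rest /services/saved/eventtypes splunk_server=local
| search title="notable_suppression-*"
| table title description search disabled
```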
My CSV has 98 rows, and I want the search to return the rows from that CSV if they are not in my index=gcp*. What I have here is the opposite: it matches if it's in index=gcp*. I need to flip it, basically.

index=gcp*
| rename data.jsonPayload.rule_details.reference as FW
| search FW = "network:prod*"
| rex field=FW "network:prod-a/firewall:(?<fw>.*)"
| rex field=FW "network:prod-b/firewall:(?<fw>.*)"
| rex field=FW "network:prod-c/firewall:(?<fw>.*)"
| rex field=FW "network:prod-d/firewall:(?<fw>.*)"
| rex field=FW "network:prod-e/firewall:(?<fw>.*)"
| lookup firewall-exception-prod-num.csv firewall_rule as fw OUTPUT firewall_rule as fw
| dedup fw
| table fw
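A sketch of the inverted pattern (field names follow the post; the prod-\w character class collapsing the five rex lines is my shorthand): start from the lookup and subtract whatever the index search finds.

```
| inputlookup firewall-exception-prod-num.csv
| search NOT
    [ search index=gcp* data.jsonPayload.rule_details.reference="network:prod*"
    | rename data.jsonPayload.rule_details.reference as FW
    | rex field=FW "network:prod-\w/firewall:(?<firewall_rule>.*)"
    | dedup firewall_rule
    | fields firewall_rule ]
```

The subsearch returns the firewall_rule values seen in gcp*, so NOT [...] keeps only the CSV rows that never appeared in the index.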
I am having trouble charting some data by hour and consoleID. Below is the search I used. I can use the stats function to count by hour, but it doesn't show well in my dashboard. I am looking to keep this same format but use the field date_hour in the chart count function:

| chart count over pedestalName by date_hour, consoleID ?

I know this doesn't work, but in my head this is what should work.

index="sg_log" host=PACSTAPP1 "" "OUTGATE" "COMPLETE" "SMT" NOT "TROUBLE_LANE"
| xmlkv
| eval consoleID=if(consoleID="AUTO","AUTO","MANUAL")
| chart count over pedestalName by consoleID
| eval total=round(AUTO+MANUAL)
| where pedestalName IN ("21","22","23","24","25","26")
| eval autogate%=round(AUTO/(AUTO+MANUAL)*100,2)
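chart accepts only one split field after over, so one common workaround (a sketch; the combined field name is my invention) is to merge the two fields into one before charting:

```
... | eval hour_console=date_hour.":".consoleID
| chart count over pedestalName by hour_console
```

This produces one column per hour/console combination, e.g. 14:AUTO and 14:MANUAL, which timecharts and dashboards can render directly.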
How do I get the Splunk AoB to use the checkpoint timestamp in future URI requests? I'm trying to have a default start time and then have it incremented based on what it saw last. I end up with an inputs.conf.spec error when I attempt to use startTime in both the REST URL parameters and in the checkpoint parameter name. Splunk complains about the checkpoint parameter name not being defined in inputs.conf.spec:

Unable to initialize modular input "test_audit_log" defined inside the app "TA-test-audit-collector": Endpoint argument "audit_time_checkpoint" has not been defined in the inputs.conf.spec file. All args defined via introspection must also be defined in the spec file.
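A sketch of the spec-file entry the error message points at (the stanza name is inferred from the input name in the message; whether this resolves the checkpoint behavior itself is a separate question):

```
[test_audit_log://<name>]
audit_time_checkpoint = <value>
```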
Hello, our Splunk universal forwarder, on our Nessus instance only, is generating findings on port 8089. Our Splunk doesn't use the universal forwarder's SSL (we implemented our own wrapper), so why is it trying to create a connection on 8089 (even though our firewall is blocking it)? I'm required to scan my Splunk Enterprise environment for compliance reasons. When I scan my search heads and indexers, I keep getting multiple SSL errors for the management port 8089. I've searched and haven't found a way to upload a third-party cert to fix this, or whether this is something I'll just have to note as not fixable. I've included some of the vulnerability issues I've found. Not sure if opening a ticket with support would get me the information I need.

SSL Certificate with Wrong Hostname
SSL Certificate Cannot Be Trusted
SSL Self-Signed Certificate
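For reference, a sketch of where a third-party certificate for the management port would be configured (file paths are placeholders; the [sslConfig] stanza in server.conf governs port 8089):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCACert.pem
```

Replacing the default self-signed cert this way is what scanners look for when flagging "SSL Self-Signed Certificate" on 8089; the wrong-hostname finding additionally requires the cert's CN/SAN to match the scanned hostname.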