Hello, I am trying to learn Splunk, so I have set up a demo version in my home lab on a Linux system. Splunk is running and I have added the local files. Then I enabled receiving on port 9997 and installed a universal forwarder on my Windows 10 PC. With tcpdump on the Linux box I can see packets arriving on port 9997, but I can't get the data into Splunk! When I try to add data from a forwarder manually, I see a message saying that I have no forwarders configured... What am I doing wrong?
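For anyone comparing configs: the receiving and sending sides each need a stanza roughly like the following. This is only a minimal sketch; the output group name and hostname are placeholders, not values from the post.

inputs.conf on the Linux indexer:

[splunktcp://9997]
disabled = 0

outputs.conf on the Windows universal forwarder:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = linux-host.example.local:9997

On the forwarder, `splunk list forward-server` shows whether the connection is active or only configured.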
A custom JavaScript error caused an issue loading your dashboard. I'm experiencing this error on both the Palo Alto Networks app and the add-on; I'm also unsure why reporting is no longer ingesting data. Thanks
I need to run a daily LDAP search that grabs only the accounts that have changed in the last 2 days. I can hard-code a date into the whenChanged attribute:

| ldapsearch search="(&(objectClass=user)(whenChanged>=20230817202220.0Z)(!(objectClass=computer)))"
| table cn whenChanged whenCreated

I am trying to turn whenChanged into a "last 2 days" variable that works with ldapsearch. I can create the value using:

| makeresults
| eval whenChanged=strftime(relative_time(now(),"-2d@d"),"%Y%m%d%H%M%S.0Z")
| fields - _time

I could use some help getting that dynamic value into the LDAP search so that the filter matches whenChanged >= that value.
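One pattern that may work here is to generate the timestamp first and hand it to ldapsearch with map, which substitutes $whenChanged$ into the inner search. A sketch, assuming SA-ldapsearch runs normally inside map:

| makeresults
| eval whenChanged=strftime(relative_time(now(),"-2d@d"),"%Y%m%d%H%M%S.0Z")
| map search="| ldapsearch search=\"(&(objectClass=user)(whenChanged>=$whenChanged$)(!(objectClass=computer)))\" | table cn whenChanged whenCreated"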
In Step 2 "Add the Dataset" of "Create Anomaly Job" within the Splunk App for Anomaly Detection, when running the following SPL, we get a warning:

index=wineventlog_security
| timechart count

"Could not load lookup=LOOKUP-HTTP_STATUS No matching fields exist."

What can it be? We use the following versions:
Splunk App for Anomaly Detection - 1.1.0
Python for Scientific Computing - 4.1.2
Splunk Machine Learning Toolkit - 5.4.0
Good afternoon. I am receiving a number of events in Splunk SOAR from Splunk, and I have a playbook that is executed for each event. However, I am wondering whether the playbook executes on the events in sequence or simultaneously. When 3 events are received, I need the playbook to run first on event 1, then on event 2, and finally on event 3, but from what I've seen SOAR executes the playbook out of order, for example 3, 1, 2. I would appreciate it if anyone has any information on this.
Interested in getting live help from a Splunk expert? Register here for our upcoming session on Splunk IT Service Intelligence (ITSI) on Wed, September 13, 2023 at 1pm PT / 4pm ET. This is your opportunity to ask questions related to your specific ITSI challenge or use case, including:
- ITSI installation and troubleshooting, including Splunk Content Packs
- Implementing ITSI use cases and procedures
- How to organize and correlate events
- Using machine learning for predictive alerting
- How to maintain accurate & up-to-date service maps
- Creating ITSI Glass Tables, leveraging performance dashboards (e.g., Episode Review), and anything else you’d like to learn!
Check out Community Office Hours for a list of all upcoming sessions. Join the #office-hours user Slack channel to ask questions and join the fun (request access here).
I am trying to filter out multiple values from two fields but I am not getting the expected result.

index=test_01 EventCode=4670 NOT (Field 1 = value1 OR Field 1 = value2) NOT (Process_Name = value 3 OR Process_Name = value 4)

I am getting Splunk results which include Process_Name=value 3 and Process_Name=value 4.
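If the real field names and values contain spaces like the placeholders above, the unquoted comparisons won't match as intended: a field name with a space needs single quotes in a where expression, and a literal value with a space needs double quotes. A sketch of a quoted variant; the names are still the post's placeholders, so this is a guess at the cause rather than a confirmed fix:

index=test_01 EventCode=4670
| where NOT ('Field 1' IN ("value1", "value2")) AND NOT (Process_Name IN ("value 3", "value 4"))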
Howdy Splunkers, I'm working on my Splunk deployment and ran into a funky issue. I am ingesting Palo Alto FW and Meraki network device logs via a syslog server. Rsyslog is set to write the logs to a file and the UF is set to monitor the directories. No issues there; however, I do run into an issue when I try to set a sourcetype or an index for these logs. I have edited the indexes.conf in the local folder on my cluster manager and pushed the required indexes to my indexers. When I go to search for the logs on my search head I cannot find any data. However, it works properly whenever I do not have a sourcetype and index destination in my inputs.conf. Any idea as to why?
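For comparison, a monitor stanza that assigns both explicitly usually looks something like this; the path, sourcetype, and index below are placeholders, not values from the post, and the index has to exist on the indexers exactly as spelled:

[monitor:///var/log/remote/paloalto/*.log]
sourcetype = pan:log
index = network_fw
disabled = 0

A mismatch between the index named in inputs.conf and the indexes actually created on the indexers could produce exactly this symptom: data searchable without the overrides, missing with them.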
test_id": "CHICKEN-0123456", "last_test_date": "2023-09-04 12:34:00"
with such above file and todays date 09/25/2023
once it is monitored by the splunk, I cannot search this data with th...
See more...
test_id": "CHICKEN-0123456", "last_test_date": "2023-09-04 12:34:00"
With a file like the above and today's date of 09/25/2023, once the file is monitored by Splunk I cannot find this data by searching over the 'current' date or even the current time window (last 15 or 60 minutes).
Instead it tends to read the dates off of the file, i.e. the 'last test date' = 09/24/2023, so in the search I have to set the time range to that day (or last 1 day) to find the data.
props.conf is currently set as:
DATETIME_CONFIG = CURRENT
I want the file to be 'read' as today if it was uploaded today (or within the last 15 minutes if it was uploaded within 15 minutes), NOT going off the date in the file.
Gurus, hop in please.
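For reference, DATETIME_CONFIG = CURRENT typically only takes effect when it sits in a stanza that matches the incoming data and on the instance that parses it. A minimal sketch with a placeholder sourcetype name:

[my_test_results]
DATETIME_CONFIG = CURRENT

If a universal forwarder monitors the file, this props.conf generally needs to live on the first heavy forwarder or indexer that parses the data, not on the search head.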
Hi All,
I am looking for an SPL query to generate an SLA metrics KPI dashboard for incidents in Splunk Mission Control. The dashboard should contain the SLA status (met/not met) and the analyst assigned to the incident.
Thank You
Hello, does the SQL "WHERE" clause have the same row limitation as "INNER JOIN"? Do "WHERE" and "INNER JOIN" have the same function and result? Thank you for your help. For example:

| dbxquery connection=DBtest query="SELECT a.name, b.department FROM tableEmployee a INNER JOIN tableCompany b ON a.id = b.emp_id"

| dbxquery connection=DBtest query="SELECT a.name, b.department FROM tableEmployee a, tableCompany b WHERE a.id = b.emp_id"
Hi,
I'm trying to create a filter based on a threshold value that is unique for some objects and fixed for the others.

index=main | lookup thresholds_table.csv object output threshold | where number > threshold
The lookup contains something like:

object | threshold
chair | 20
pencil | 40
The problem here is that not all objects are inside the lookup, so I want to set a fixed threshold for all other objects; for example, a threshold of 10 for every object except those inside the lookup.
I tried these things without success:

index=main | lookup thresholds_table.csv object output threshold | eval threshold = coalesce(threshold, 10) | where number > threshold
index=main | fillnull value=10 threshold | lookup thresholds_table.csv object output threshold | where number > threshold
index=main | eval threshold = 10 | lookup thresholds_table.csv object output threshold | where number > threshold

The objective is to identify when an object reaches an X average value, except for those objects that have a higher average value.
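For reference, the coalesce pattern in the first attempt is the usual way to default missing lookup values, so a more defensive variant may help; the empty-string and tonumber handling guards against the CSV returning empty or string values (an assumption about the data, not something confirmed in the post):

index=main
| lookup thresholds_table.csv object OUTPUT threshold
| eval threshold = if(isnull(threshold) OR threshold="", 10, tonumber(threshold))
| where number > threshold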
I am trying to create a timeline dashboard that shows the number of events for a specific user over the last 7 days (x-axis being _time and y-axis being the number of events). We do not have a field option for individual users yet. The syntax I have shows a nice timeline from Search in Splunk, but when I try to create a dashboard line chart from it, I either get nothing or mismatched info. Syntax I use for search:

index="myindex1" OSPath="C:\\Users\\Snyder\\*"
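A line chart panel generally needs the search itself to return a time series rather than raw events (the event timeline in Search is drawn for free, a chart panel is not). A sketch of the same search shaped for charting; the 1-day span is an assumption:

index="myindex1" OSPath="C:\\Users\\Snyder\\*" earliest=-7d
| timechart span=1d count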
Hi, we are using Splunk ES with notable events and suppressions. For the sake of completeness: we have alerts that produce notables, and some of these notables can be suppressed (through Splunk ES). So, in the "Incident Review" section we are able to see all the notables for which there are no suppressions. We are trying to send that same set (i.e. all the notables for which there are no suppressions) to SOAR. We tried adding the action "send to SOAR" to one of the alerts that produce notables, but this way all the notables (even the suppressed ones) arrive on the SOAR side. Do you know if there is a native feature (or quick way) to send only the unsuppressed notables from Splunk to Splunk SOAR? Thank you in advance.
I'm totally and utterly new to Splunk. I just ran the Docker Hub sample and followed the instructions: https://hub.docker.com/r/splunk/splunk/
I opened the search tab and most search commands seem to work fine. For example, the following command:
| from datamodel:"internal_server.server"
| stats count
Returns a count of 33350.
While this command:
| tstats count from datamodel:"internal_server.server"
as well as this one:
| tstats count
both return zero.
How can I get tstats working in this docker env with the sample datasets?
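One detail that may matter: tstats with no where clause and no data model only scans the indexes searched by default, which typically do not include the internal indexes this sample data lives in. A quick sanity check that exercises tstats without any data model:

| tstats count where index=_internal

If that returns results, the gap is likely on the data-model side (acceleration, or the model/dataset name); that is a guess from the post, not a confirmed diagnosis.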
I want to use the free cloud trial. I have done everything, but my access instance option is not enabling. What should I do? Please refer to the screenshot below. Thank you. @suyogpk_11
Hi,
while importing custom modules (e.g. `from logger import Logger`), we see a `ModuleNotFoundError: No module named 'logger'` error in splunkd.log, generated by the file "/opt/splunk/lib/python3.7/site-packages/splunk/persistconn/appserver.py". We suspect the library is somehow not able to find the app's internal modules and hence throws the error.
We also see the warning `DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses` in the same log file.
This issue appears on Splunk version 9.0.x; on Splunk version 8.2.x it works fine. The main difference we have noticed between these two versions is that Python 2.7 support was removed in 9.0.x.
We would like to know a possible solution for this error.
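A workaround often suggested for handlers loaded through persistconn is to put the script's own directory on sys.path before importing app-local modules. A sketch, assuming logger.py sits in the same bin directory as the handler script:

import os
import sys

# Make sibling modules importable regardless of how appserver.py loads this script
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from logger import Logger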
Hi,
I'm trying to create a table that contains a list of tasks. The list is static and stored in a lookup table called tasks.csv.
So far I have the following search:
index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task
| search [|inputlookup tasks.csv ]
This creates a table that looks like this:

task | count
task_a | 1
task_b | 1
task_c | 1
task_d | 1
However, if a task in my static list does not appear in the search results, it does not show in the table. I want the table to contain the whole list of tasks, regardless of whether they appear in the search results or not.
i.e.

task | count
task_a | 1
task_b | 1
task_c | 1
task_d | 1
task_e | 0
task_f | 0
Any ideas on how I can do this?
The closest I've got is using a join, which does work but does not allow for a wildcard, meaning I'd need to specify the whole 'task'.
|inputlookup tasks.csv
| join type=left task [ | search index=one OR index=two | rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)" | stats count by task]
| fillnull value=0 count
| table task count
Would appreciate any thoughts or suggestions. Thanks in advance.
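One join-free pattern, assuming the extracted task values match the lookup's task column exactly (no wildcards):

index=one OR index=two
| rex field=_raw "^[0-9-:.\s]*\w+\s+(?<task>.*)"
| stats count by task
| append [| inputlookup tasks.csv | eval count=0]
| stats max(count) as count by task

The append adds every task from the lookup with a count of 0, and the final stats keeps the real count wherever the search actually saw the task.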
I have a Splunk universal forwarder installed, and Splunk Enterprise is seeing the forwarder. Now I want to send network firewall logs to the forwarder host so that they are forwarded on to the Enterprise platform.
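One way to do this is to point the firewall's syslog output at the forwarder host and add a network input there. A sketch of inputs.conf on the forwarder; the port, sourcetype, and index are placeholders:

[udp://514]
sourcetype = firewall_syslog
index = network

Writing the syslog stream to files with rsyslog and monitoring those files is generally considered more robust than a direct UDP input, since a direct input drops anything that arrives while the forwarder restarts.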
I'm trying to produce an architecture diagram of our Splunk environment, and I want to know what each of our universal forwarders and heavy forwarders is ingesting and sending. I'm looking in inputs.conf and outputs.conf but they are of no use. Is there a way to view what each forwarder is ingesting and sending, whether via the command line or in Splunk itself?
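Much of this already exists in the forwarders' own metrics, which land in the _internal index as long as the forwarders send their internal logs (the default). A sketch:

index=_internal source=*metrics.log* group=per_source_thruput
| stats sum(kb) as total_kb by host, series

Here host is the forwarder and series is the monitored source; swapping per_source_thruput for per_index_thruput or per_sourcetype_thruput breaks the traffic down by index or sourcetype instead.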