All Topics



I am currently researching how to use the Alerting Templates in AppD for health rules and get an error stating "Short name not available for the condition in critical criteria" or "Short name not available for the condition in warning criteria". When I export and look further into the underlying JSON file, I find that there is a null for the shortName rules that are causing the error. Some of the shortNames point to 'A' or 'B', which I assume line up with the criteria in the health rule conditions. The critical or warning criteria that show as null in the JSON only seem to have one criterion in the health rule. Does anyone know how or why these got set to null, and if I change them to 'A' instead of null, will that break anything?
This always feels exceptionally difficult to me; I'm not sure what I'm missing. I have a list of machines in a simple CSV with a Name, Category, and Tag, kind of like this:

VM   Category    Tag
VM1  Backup      Friday
VM1  Datacenter  AWS
VM2  Backup      Monday
VM2  Critical    Yes
VM2  Datacenter  Azure
VM3  Critical    No
VM3  Datacenter  Azure

I want to find machines that do not have a Backup category, so in this example it would be VM3. I've written a search to give me all machines, and another one to give me all machines with backups. I've written this:

index=vcenter source=*tag*
| dedup VM
| fields VM
| search VM NOT [search index=vcenter source=*tag* Category=Backup* | dedup VM | fields VM]
| table VM

I get some results, but not all.
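One possible approach (a sketch, assuming the Category field is extracted for every tag event as in the example): gather each VM's categories with stats, then keep only the VMs whose multivalue list contains no Backup entry.

```
index=vcenter source=*tag*
| stats values(Category) AS categories BY VM
| where isnull(mvfind(categories, "^Backup"))
| table VM
```

This avoids the subsearch entirely, so it is not subject to subsearch result limits, which is one common reason a NOT [subsearch] pattern returns incomplete results.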
Greetings all. I am having some trouble getting syslog data to filter to nullQueue. Below are my config files and the additional troubleshooting I've done so far.

inputs.conf:

[udp://5514]
connection_host = ip
index = org_index
sourcetype = cisco:firepower:syslog

props.conf:

[source::udp:5514]
TRANSFORMS-set = setnull,setparsing

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = \%FTD-\d-(430002|430003)
DEST_KEY = queue
FORMAT = indexQueue

My environment flows Firepower syslog > Heavy Forwarder (on-prem) > Splunk Cloud, and the above configs are on the Heavy Forwarder. In the syslog stream there are "Correlation Event" logs I am trying to drop. Above, I am trying to ingest only message IDs 430002 and 430003 and drop everything else; however, Correlation Events are still coming in. I've also tried the alternative where I target "Correlation Event" in the regex in an effort to drop them, also unsuccessfully. Additional troubleshooting steps I've taken:

- reformatting the source as [source::udp://5514] in props.conf
- removing spaces before and after the equals signs for each attribute
- removing the line "REGEX = ." in transforms.conf
- trying the sourcetype in props.conf, i.e. [cisco:firepower:syslog]
- trying different regex matches, using regex101
- restarting the Heavy Forwarder after each change

Here is the Splunk doc I've been working with primarily: https://docs.splunk.com/Documentation/Splunk/8.1.0/Forwarding/Routeandfilterdatad#Discard_specific_events_and_keep_the_rest

Any help is greatly appreciated!
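For reference, a minimal sketch of the keep-only pattern keyed on the sourcetype rather than the source (assuming the sourcetype is applied at the input as shown above; transforms run left to right, so setnull must be listed before setparsing):

```
# props.conf on the Heavy Forwarder
[cisco:firepower:syslog]
TRANSFORMS-set = setnull, setparsing

# transforms.conf
# drop everything...
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then send only the wanted message IDs back to the index queue
[setparsing]
REGEX = %FTD-\d-(430002|430003)
DEST_KEY = queue
FORMAT = indexQueue
```

One thing worth verifying is where parsing happens: queue-routing transforms only take effect on the first Splunk Enterprise instance that parses the data, so if the events were already parsed upstream of the Heavy Forwarder they will pass through untouched.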
I've figured out how to display a single report in a row/panel based on a user's dropdown choice:

<row>
  <panel>
    <title>Vuln Totals - Last 30 Days</title>
    <input type="dropdown" searchWhenChanged="true" token="field1">
      <label>Machine Classification</label>
      <choice value="WorkStation_Report_1">Workstations</choice>
      <choice value="DataCenter_Report_1">DataCenter</choice>
      <choice value="DMZ_Report_1">DMZ</choice>
      <default>WorkStation_Report_1</default>
    </input>
    <chart>
      <search ref="$field1$"></search>
    </chart>
  </panel>
</row>

However, I'd like to take the user's choice/token and use it to display three reports side by side (instead of just one):

<row>
  <panel>
    <table>
      <title>Top 5 Vulns</title>
      <search ref="WorkStation_Report_1"></search>
    </table>
  </panel>
  <panel>
    <table>
      <title>Top 5 IPs</title>
      <search ref="WorkStation_Report_2"></search>
    </table>
  </panel>
  <panel>
    <table>
      <title>Top 5 Overdue</title>
      <search ref="WorkStation_Report_3"></search>
    </table>
  </panel>
</row>

Does anyone know if this is possible or, more importantly, how to achieve it? Thanks!
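One possible approach (a sketch, untested): store only the classification prefix in the token, then build each report name from the token plus a suffix, since the report names already follow a consistent pattern.

```
<input type="dropdown" searchWhenChanged="true" token="field1">
  <label>Machine Classification</label>
  <choice value="WorkStation">Workstations</choice>
  <choice value="DataCenter">DataCenter</choice>
  <choice value="DMZ">DMZ</choice>
  <default>WorkStation</default>
</input>
<row>
  <panel>
    <table>
      <title>Top 5 Vulns</title>
      <search ref="$field1$_Report_1"></search>
    </table>
  </panel>
  <panel>
    <table>
      <title>Top 5 IPs</title>
      <search ref="$field1$_Report_2"></search>
    </table>
  </panel>
  <panel>
    <table>
      <title>Top 5 Overdue</title>
      <search ref="$field1$_Report_3"></search>
    </table>
  </panel>
</row>
```

This assumes token substitution is honored inside the search ref attribute in your Splunk version; it is worth testing on one panel first.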
Hello all!

I'm currently implementing Splunk inside one of our company systems. The logging structure works like this:

C:\Systems\System\Logs\A_Lot_Of_Folders\2020(year)\11(month)\day.txt

Since I have a lot of folders inside the Logs structure, I configured my stanza like this:

[monitor://C:\Systems\System\Logs\*]
index = MyIndex
disabled = 0
_TCP_ROUTING = my_config

I have also tried:

[monitor://C:\Systems\System\Logs]
index = MyIndex
disabled = 0
_TCP_ROUTING = my_config

But my Universal Forwarder won't look inside the folders under the Logs directory.

Question 1: Is there a way to configure a "global stanza setting" so the Universal Forwarder will pick up every new event inside all of the folders, or will I have to do it the hard way, configuring each and every folder with its own monitor stanza?

Question 2: Is there a way to automate the rollover to the next month or year, so I won't have to go back and update all the stanzas with the current year/month?

In terms of troubleshooting, I've already restarted the service and I have connectivity to the Splunk destination. Thank you in advance!
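A sketch of a recursive monitor stanza (monitor inputs recurse into subdirectories by default; the `...` wildcard makes the recursion explicit and matches any depth, so newly created year/month folders are picked up automatically without stanza changes):

```
[monitor://C:\Systems\System\Logs\...\*.txt]
index = MyIndex
disabled = 0
_TCP_ROUTING = my_config
```

If this still picks up nothing, `splunk list monitor` on the forwarder and checking splunkd.log for permission errors on the subfolders are the usual next steps.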
From the table output, I want to rename row values for a few fields. For example:

Column 1  Column 2
1         AAA
2         C
3         D
4         MMM
5         MMM
6         DDD

I want the result to look like this:

Column 1  Column 2
1         Apple
2         Carrot
3         Drumstick
4         Mango
5         Mango
6         Drumstick

Basically, I have a list for mapping: any value beginning with A should be renamed to Apple, values beginning with D renamed to Drumstick, and so on. Can someone please help me? I am quite new to Splunk. Thanks in advance.
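One possible sketch using eval/case on the first letter (the field name is taken from the example table; for a long mapping list, a lookup table would scale better than a hard-coded case):

```
... | eval "Column 2" = case(
      match('Column 2', "^A"), "Apple",
      match('Column 2', "^C"), "Carrot",
      match('Column 2', "^D"), "Drumstick",
      match('Column 2', "^M"), "Mango",
      true(), 'Column 2')
```

Note the single quotes around 'Column 2' on the right-hand side: in eval, single quotes reference a field value, while double quotes make a string literal.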
I need to integrate workday logs in a Splunk cloud environment and I know the app does not currently support the workday day add on for Splunk.  How else would I go about doing this integration? CSV?
I have a search/dashboard that shows data over the last 30 days. The search is as follows:

index=server EventCode=5829
| stats count by ComputerName, Domain, Machine_SamAccountName, Machine_Operating_System

This search gives me roughly 70 events a month. The problem is that my customer would now like the timestamp added to the table (which is easy by just adding the _time field), but when I do that, I get 10,000 results! I showed the customer this and they said it would be fine to show just the latest event in the last 30 days for each Machine_SamAccountName.

My question is: how?
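A sketch using stats latest() to keep one row per account (field names are taken from the search above; latest() picks the value from the most recent event in each group):

```
index=server EventCode=5829
| stats latest(_time) AS last_seen
        latest(ComputerName) AS ComputerName
        latest(Domain) AS Domain
        latest(Machine_Operating_System) AS Machine_Operating_System
        count
        BY Machine_SamAccountName
| convert ctime(last_seen)
```

Adding _time to the by-clause is what blows up the row count, since every distinct timestamp becomes its own group; aggregating the timestamp instead keeps the table at one row per account.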
Hello,

I am trying to create a drilldown dashboard. Basically, I want to pass a subnet value (currently represented as a string) from one panel to another search panel, and search on data in that query using the subnet value. I have tried several methods and none seem to work.

Ideally, I thought something like the below would work, in pseudo code:

Search ... | eval subnet_value=$subnet value from first query$ | where ipv4=subnet_value

However, I get back no results. Perhaps someone has some tips on things I can try next, as I am stuck currently.

Thanks in advance,
Joe
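A sketch of the usual Simple XML pattern (token and field names here are placeholders): set a token on click in the first panel's drilldown, then reference the token directly in the second panel's search rather than via eval.

```
<table>
  <search><query>index=... | table subnet ...</query></search>
  <drilldown>
    <set token="subnet_tok">$click.value$</set>
  </drilldown>
</table>

<table>
  <search>
    <query>index=... | where cidrmatch("$subnet_tok$", ipv4)</query>
  </search>
</table>
```

If the subnet is in CIDR notation, cidrmatch handles the containment test; for a plain string prefix, a simple `ipv4="$subnet_tok$*"` style filter may be enough.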
I am able to create ServiceNow tickets from Splunk. In the email alert, I receive Affected Computer, UPN, Event Title, and File Name. I want to capture the same details in the ServiceNow ticket, but the ServiceNow tickets don't have any of this information; they are blank. Please help me with how to pass this info from Splunk. Do I need to make changes under Settings -> Field -> Workflow, and if so, what changes?

[Screenshots: email alert received; ServiceNow ticket with no info; ServiceNow alert settings]

@isoutamo @ITWhisperer @gcusello @thambisetty @bowesmana @DalJeanis
Hi,

My requirement is to check all the checkboxes based on the results obtained from a query. Example: when my query runs, it returns the sourcetypes below, which are displayed as checkbox options. Only the first sourcetype, "HR", is checked and the others are unchecked. I want all of the query's results to be checked, which I am failing to do.

sourcetype
HR
LP
MN

I am using the below code:

<input type="checkbox" id="check1" token="environment" searchWhenChanged="true">
  <label>Environments</label>
  <search>
    <query>index = main sourcetype = landscape earliest=-48h | dedup sourcetype | table sourcetype</query>
  </search>
  <change>
    <condition value="HR">
      <set token="HR">HR</set>
    </condition>
    <condition>
      <unset token="HR"></unset>
    </condition>
  </change>
  <default>*</default>
  <initialValue>*</initialValue>
  <delimiter> </delimiter>
  <fieldForLabel>sourcetype</fieldForLabel>
  <fieldForValue>sourcetype</fieldForValue>
</input>

Any suggestions on how I can do this?
Hi, can someone suggest how to enable drilldown for a specific column? For example, if I have 5 columns and need drilldown enabled only on the 3rd column, how can this be achieved?
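A sketch of the usual Simple XML pattern (the column name is a placeholder): match the clicked column inside the drilldown with a condition, so clicks on any other column fall through to a no-op.

```
<table>
  <search><query>...</query></search>
  <drilldown>
    <condition field="third_column_name">
      <!-- fires only when a cell in this column is clicked -->
      <set token="selected_value">$click.value2$</set>
    </condition>
    <condition>
      <!-- clicks on all other columns do nothing -->
      <unset token="selected_value"></unset>
    </condition>
  </drilldown>
</table>
```

Here $click.value2$ carries the value of the clicked cell, while $click.value$ would carry the value in the row's first column.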
We have a script which downloads a file from a location every 5 minutes, and we are monitoring the files using a batch stanza. Each file is approximately 8 GB. There is ingestion delay in Splunk, and we would like to improve ingestion for the batch stanza. Are there any settings available for the batch stanza which can help us resolve the data delay in Splunk?
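One setting commonly checked for ingestion delay (a sketch; whether it helps depends on where the bottleneck actually is) is the forwarder's throughput cap in limits.conf, which defaults to a low value on Universal Forwarders and can throttle large files:

```
# limits.conf on the forwarder
[thruput]
# 0 removes the bandwidth cap; raise gradually instead if the host is shared
maxKBps = 0
```

The metrics.log entries on the forwarder (blocked queues, thruput lines) are a good way to confirm whether the delay is at the forwarder or further downstream before changing anything.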
Hi,

We have a distributed environment with 4 search heads. Do we need to place the inputs.conf of the Website Monitoring app on all 4 search head instances, or is it enough to place inputs.conf on just one of the search heads?
No matter which rule I add, I always receive the following error message: Error in 'itsirulesengine' command: Invalid message received from external search command during setup, see search.log.
Hi,

I really like the new Dashboards Beta app. My question: let's say I have a dashboard that I created inside the app (JSON); can I use this dashboard's code inside the regular Splunk environment? Can I create a custom app that will "call" the Dashboards Beta dashboard and present it (without being inside the app itself)?

Thanks.
Hi,

I have Splunk DB Connect v2.4.1 on a client platform, and I must implement a SQL query to import specific data from the client database into Splunk. In the subquery before the final select, I have the construct "Sum(abs(table_column_name)) AS ...", but the query returns an error. Is the problem related to the app in general, to the app version, or to the SQL construct? Alternatively, would it be better to build this calculation in Splunk once the data has been imported?

Thanks in advance.
AC
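For what it's worth, if the calculation turns out to be easier after import, a sketch of the equivalent in SPL (field name is a placeholder standing in for the database column):

```
... | eval abs_val = abs(table_column_name)
    | stats sum(abs_val) AS abs_sum
```

Comparing this against the SQL result is also a quick way to tell whether the error is in the SQL construct itself or in how DB Connect handles the query.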
Hi,

I have several data sources that each have their own timestamp (different times, one format) due to geographic differences; however, according to the sourcetype time settings, they should all be indexed according to the time of the enterprise. That works like a charm most of the time; however, every now and then, Splunk decides to change the year of my events from 2020 to 2019, or makes some other time change. This does not happen to all indexes. For example, I read _audit for fired alerts, which are then dashboarded correctly; however, the events themselves are nowhere to be found unless I change the search to 2019. So far I cannot relate these situations to anything out of the ordinary. Any ideas?

Kind regards!
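A sketch of the props.conf settings usually checked for this symptom (the sourcetype name, format string, and timezone are placeholders): pinning TIME_PREFIX, a TIME_FORMAT that includes an explicit year, a tight lookahead, and TZ stops Splunk from falling back to timestamp guessing, which is the usual cause of events landing in the wrong year.

```
[my:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = Europe/Amsterdam
```

If TIME_FORMAT does not match an event exactly, Splunk silently reverts to heuristic extraction for that event, which would explain why only some events are affected.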
I'm working with a system where each event has its own creation timestamp (always the same) and modification timestamp. I want to get the most recent resolved event to establish the latest modification time, and then get the earliest resolved event to establish the actual resolve time, which I've achieved with the following search:

index=<index name>
| where match(status, "resolved")
| dedup 1 incident_id sortby -_time
| table incident_id, status, creation_time, resolve_time, modification_time
| appendcols [
    search index=<index name>
    | where match(status, "resolved")
    | dedup 1 incident_id sortby +_time
    | eval resolve_time = modification_time
    | reverse
    | table resolve_time ]
| eval modification_time = if(resolve_time == modification_time, "Unchanged", modification_time)
| table incident_id, status, creation_time, resolve_time, modification_time

It can be hard to visualise this, so I've illustrated it with screenshots. From the first screenshot, you can see that the incident with ID 1735 was initially resolved on 2020/11/06 at 11:56:27 but was modified twice, once at 11:57:54 and again at 15:02:42. When the search only includes one post-resolution modification, this is exactly what's reported, as the second screenshot shows. However, the strange thing is that, when there is a second post-resolution modification, the timestamps get skewed into the future: resolve_time is supposed to remain 2020/11/06 11:56:27.351 but gets changed to 2020/11/06 14:52:13.822, as the third screenshot shows.

Through testing, I have:
- Verified that the two searches work exactly as intended when copied and pasted into standalone searches.
- Found that all timestamps (_time, creation_time, and modification_time) within the appendcols subsearch are skewed.

Even more bizarrely, the timestamps that are output aren't mentioned anywhere else. It's almost as if they are the result of some kind of search-time calculation. Why is this happening?
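For what it's worth, appendcols pastes columns together purely by row position, not by any key, so any difference in row order or row count between the outer search and the subsearch silently pairs timestamps with the wrong incidents. A sketch of an alternative that groups by incident_id instead of relying on positional alignment (field names are taken from the search above; earliest/latest in stats are determined by event _time):

```
index=<index name>
| where match(status, "resolved")
| stats earliest(modification_time) AS resolve_time
        latest(modification_time)  AS modification_time
        latest(creation_time)      AS creation_time
        latest(status)             AS status
        BY incident_id
| eval modification_time = if(resolve_time == modification_time, "Unchanged", modification_time)
| table incident_id, status, creation_time, resolve_time, modification_time
```

A single stats pass also avoids the subsearch limits and the reverse/dedup bookkeeping entirely.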
I've been trying to extract fields from a log at search time with only the help of props.conf. In the Splunk docs I read that EXTRACT would be good in this case, especially since I'm trying to extract multiple fields at once.

This is my EXTRACT in props.conf so far:

EXTRACT-fields = (?P<src_ip>\d*\.\d*\.\d*\.\d*)(?=\ \d* TCP_)\s(?P<bits>\d*)\s(?P<tcp_state>\w*_\w*)\s

This would go on for a while with different fields, but it doesn't work. What am I doing wrong? This is what the log looks like, for example:

2014-03-27 12:39:32 20 10.71.15.207 304 TCP_HIT 367 1470 GET http www.computerworld.com 80 /elqNow/elqFCS.js - - - - 23.196.74.53 application/x-javascript http://www.computerworld.com/s/article/9247206/Gameover_malware_takes_aim_at_Monster.com_and_CareerBuilder.com "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)" OBSERVED "Technology/Internet" - 163.252.254.201 23.44.202.53 52809

Thanks a lot for any help!
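A sketch of a simplified pattern for the same first few fields (assuming the space-delimited layout of the sample line): the lookahead in the original is redundant, since the same text is matched again immediately after it, and \d* can match the empty string, so \d+ is safer.

```
EXTRACT-fields = (?P<src_ip>\d+\.\d+\.\d+\.\d+)\s+(?P<bits>\d+)\s+(?P<tcp_state>\w+_\w+)\s
```

Also worth checking: EXTRACT is a search-time setting, so it must live under the correct sourcetype stanza in props.conf on the search head (or be shared to the app where you are searching), otherwise it will simply never fire.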