All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello everyone! I started working with Splunk two months ago. I don't know where to start looking for information on how to build a query and a dashboard (flow map). Do you have any ideas? Greetings
Using: Splunk Add-on for Microsoft Windows 8.5.0. We have created a report listing users that are part of specific groups, using this logic:

    | inputlookup AD_Obj_User
    | lookup AD_Obj_Group member AS dn

We noticed users disappearing from these reports when a user was moved to another OU. This is what we see happening when we move a user to another group:

1. In the AD_Obj_User lookup, the dn changes to cn=username, ou=NewGroup, ......
2. In the AD_Obj_Group lookup, the user's dn in the member field does not change and still looks like cn=username, ou=OldGroup, ......

Because the dn of the user and the dn in the member field are now different, the user disappears from the report. As part of our debugging effort we tried updating another property of the group (the description); after this the member field in AD_Obj_Group is updated as well, and the user is back in the report again. This looks like a bug to me, but maybe I'm missing something. Is anyone able to solve this mystery?
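A diagnostic sketch for this situation, reusing the lookup and field names from the post (the cn output field is an assumption about the group lookup's columns); it lists users whose current dn no longer matches any member entry, which makes the stale rows visible:

    | inputlookup AD_Obj_User
    | lookup AD_Obj_Group member AS dn OUTPUT cn AS group
    ``` users with no group match are the ones that drop out of the report ```
    | where isnull(group)
    | table dn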
Hello, I would like the "host" field of our AWS CloudTrail logs to be the account ID of each log (we have multiple AWS accounts). The current value is "$decideOnStartup". We are using SQS-based S3 to read a bucket containing CloudTrail data from several accounts. Is there any way to do this? Thank you
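One possible approach, sketched under these assumptions: the events carry the add-on's usual aws:cloudtrail sourcetype, parsing happens on a heavy forwarder (index-time transforms must run wherever the data is parsed), and CloudTrail's recipientAccountId is present in each event:

    # props.conf
    [aws:cloudtrail]
    TRANSFORMS-sethost = cloudtrail_host_from_account

    # transforms.conf
    [cloudtrail_host_from_account]
    REGEX = "recipientAccountId"\s*:\s*"(\d+)"
    FORMAT = host::$1
    DEST_KEY = MetaData:Host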
Hi, I have read all the Splunk HEC documentation, but there are some things that are not clear to me. I know the process to create a new token:

1. Log on to your Splunk server.
2. Go to Settings > Data Inputs > HTTP Event Collector > Global Settings.
3. Edit the Global Settings. Click the Enabled button for the All Tokens option. ...
4. Go to Settings > Data Inputs. Click +Add New in the HTTP Event Collector row to create a new HEC token.

So, unless I am mistaken, a new stanza is created in the inputs.conf file of the heavy forwarder? If yes, do we also have to update the outputs.conf file on the heavy forwarder to route the events to the indexers? Are there any other configurations to do? I have also understood how to test the HTTP Event Collector with the curl command:

    curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://mysplunkserver.example.com:8088/services/collector/event -d '{"sourcetype": "my_sample_data", "event": "http auth ftw!"}'

In this example, does https://mysplunkserver.example.com:8088 correspond to the HEC endpoint? What I also do not understand is, once the HEC configuration works, how the events are automatically sent to the Splunk platform. Is there a scheduled task that does this? Finally, if somebody has interesting tutorials on HEC topics (other than Splunk's own tutorials), I would be very interested. Thanks
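For reference, a minimal outputs.conf sketch of the kind asked about here, with placeholder indexer names; on a heavy forwarder this is what routes HEC events onward, and forwarding happens continuously as events arrive rather than via a scheduled task:

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997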
Why is CIM important? An easy example, please.
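As an illustration of the point being asked about: CIM maps each vendor's field names onto one shared model, so a single search covers all of them. A sketch, assuming the CIM add-on and the Authentication data model are available:

    ``` one normalized search instead of one per vendor; works across any ```
    ``` source that is mapped to the Authentication data model ```
    | tstats count from datamodel=Authentication where Authentication.action="failure" by Authentication.src, Authentication.user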
Hi, please help extract the field. Sample data: "tag":AKAMAI/WAF/ Thanks.
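A hedged sketch, assuming the raw event literally contains "tag":AKAMAI/WAF/ and that everything after the colon up to the next whitespace, comma, or quote is the wanted value:

    | rex field=_raw "\"tag\":(?<tag>[^\s,\"]+)"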
I have a dashboard where I want the following features:
1. Drilldown: I set the option to "Link to search", but when I click on the graph the search page opens in the same tab; I want it to open in another tab.
2. I have another panel where a bar graph is split by host; I want to show a different color for each host. How can I do this?
3. I want to display the values on the graph; they do display, but they overlap. How can I make them readable?
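A Simple XML sketch touching all three points, assuming a Classic (Simple XML) dashboard; the query, colors, and drilldown token are placeholders:

    <chart>
      <search>
        <query>index=_internal | timechart count by host</query>
      </search>
      <!-- 2: fixed color order for the series -->
      <option name="charting.seriesColors">[0x65a637, 0x6db7c6, 0xf7bc38]</option>
      <!-- 3: draw value labels on the chart -->
      <option name="charting.chart.showDataLabels">all</option>
      <!-- 1: target="_blank" opens the drilldown search in a new tab -->
      <drilldown>
        <link target="_blank">search?q=index%3D_internal%20host%3D$click.name2|u$</link>
      </drilldown>
    </chart>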
I've been tasked with improving existing and creating new Splunk dashboards using React. I'm following https://splunkui.splunk.com/Create/Overview, but I have a major restriction that is giving me a headache: I cannot use the npm/yarn/npx commands. The team managing Splunk's AWS resources will not allow me to run those commands, so the entire app must be self-contained, almost like a Lambda layer. I tried zipping the staging folder and creating a new app with it, but that failed. How can I go about resolving this?
I've been wanting to build some integrity checking and other functionality based on knowing the fields in a sourcetype for a while now. At my company we've built a data dictionary of the indexes and sourcetypes of interest to the SOC. They can search the dictionary to help them remember the important data sources. I'd like to augment/use this info in a couple of new ways: 1) give them a field list for all of these sourcetypes so they can search for which sourcetypes have a relevant field (like src_ip); 2) note the fields that appear in 100% of records for a sourcetype, then check every day whether any of those fields is missing. This would quickly clue me in to data issues related to the events sent, parsing, or knowledge objects. I know how to get a list of fields for one sourcetype and store that info, and I know how to compare a sourcetype's past set of fields to a current set. My challenge now is how to get the list of fields for the 100 sourcetypes of interest. So far my best idea is to create 100 jobs, one per sourcetype, something like:

    ```1 - get the sourcetypes of interest and pull back data for them```
    [| inputlookup dataDictionary.csv where imf_critical=true
     | eval yesterday=relative_time(now(),"-1d@d")
     | where evalTS>yesterday
     | dedup sourcetype | sort sourcetype | head 5 | tail 1
     | table sourcetype] earliest=-2d@d latest=-1d@d
    ```2 - get samples for all indexes in which the sourcetype appears```
    | dedup 10 index sourcetype
    | fieldsummary
    ```3 - determine field coverage so we can pick the hallmark fields```
    | eventstats max(count) as maxCount
    | eval pctCov=round(count/maxCount,2)*100
    | table field pctCov
    ```4 - add back in the sourcetype name```
    | append
        [| inputlookup dataDictionary.csv where imf_critical=true
         | eval yesterday=relative_time(now(),"-1d@d")
         | where evalTS>yesterday
         | dedup sourcetype | sort sourcetype | head 5 | tail 1
         | table sourcetype]
    | eventstats first(sourcetype) as sourcetype
    | eval evalTS=now()
    | table sourcetype evalTS field pctCov
    ```5 - collect the fields to a summary index daily```
    | collect index=soc_summary marker="sumType=dataInfo, sumSubtype=stFields"

If I ran 100 jobs like this, the number after head would increment to give me the next sourcetype. But I feel like there has to be a better way to run fieldsummary across a lot of sourcetypes. Any ideas?
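One alternative worth sketching: the map command runs one subsearch per input row, which collapses the 100 jobs into a single scheduled search. A sketch reusing the lookup and field names from the post; note that map runs its subsearches serially, so it can be slow at this scale:

    | inputlookup dataDictionary.csv where imf_critical=true
    | table sourcetype
    | map maxsearches=100 search="search index=* sourcetype=\"$sourcetype$\" earliest=-2d@d latest=-1d@d
        | dedup 10 index
        | fieldsummary
        | eventstats max(count) as maxCount
        | eval pctCov=round(count/maxCount,2)*100, sourcetype=\"$sourcetype$\", evalTS=now()
        | table sourcetype evalTS field pctCov"
    | collect index=soc_summary marker="sumType=dataInfo, sumSubtype=stFields"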
Hi all, is it possible to invert the y1 and y2 axes? Second question: if the y1 axis shows a percentage value and y2 shows a count, is it possible to add the "%" symbol to y1? Thanks to all!
Hi, I am trying to create an alert and a weekly scheduled report for the user "us.admin" in Splunk. I want to get an alert on this user's logins, and on their activities if possible. I am already monitoring the path and pushing the data into Splunk. What are the appropriate search strings to do this? Thanks
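A starting-point sketch only; the index and field names below are assumptions that depend on what is actually being ingested (shown here for Windows Security logs, where EventCode 4624 is a successful logon):

    index=wineventlog EventCode=4624 user="us.admin"
    | stats count by _time, host, src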
Hi Splunkers! Is there a way to automatically retrieve entity information like OS, IP address, OS version, etc., and add it as dimensions? All my entities are retrieved via the Splunk Add-on for *nix and the Splunk Add-on for Windows. All my entities are correctly imported into ITEW, but none of them has other information, like its IP or OS, in the entity info fields. I have about 1200 entities, so I'm looking for a way to add this information to all my entities automatically. All the needed data is correctly indexed, and my saved searches ITSI Import Objects - TA *Nix and ITSI Import Objects - Perfmon pick this info up correctly. Can somebody help me with this issue? Happy Splunking!
I am trying to use Splunk Dashboard Studio. I have a search for a single value viz:

    | makeresults
    | eval Date=strftime(now(),"%Y-%m-%d %H:%M:%S")
    | table Date
    | rename Date AS UTC-DateTime

The single value viz always renders the time in the format "2022-12-02T20:39:21", ignoring the strftime format in my search. I can apply a format to a table column with no problem. How can I format the value in the single value viz as "2022-12-02 20:39:21", and how can I modify or refresh the query so that it gets the time every second? I saw a YouTube tutorial on this, but the author did not explain the query, the refresh process, or how to apply a different format to the value. Please advise. Thanks, eholz1
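On the refresh half of the question, a Dashboard Studio data-source sketch; the data source name is a placeholder, and renaming the field with an underscore instead of a hyphen is a hedge against the viz re-parsing the value as a date:

    "dataSources": {
      "ds_clock": {
        "type": "ds.search",
        "options": {
          "query": "| makeresults | eval UTC_DateTime=strftime(now(), \"%Y-%m-%d %H:%M:%S\") | table UTC_DateTime",
          "refresh": "1s",
          "refreshType": "delay"
        }
      }
    }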
We are looking to see the size of all the fields in a particular index. We have come up with this search to see the size of a particular field, but we would like to see the size of all the fields in the index in order to understand where the bulk of the data is sitting:

    index=index_name
    | eval raw_len=(len(_raw)/1024/1024/1024)
    | stats sum(raw_len) as GB by field_name
    | sort -GB
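A sketch of one way to get per-field sizes in a single pass; this is an approximation (len() counts the characters of each extracted value, whereas the search above sums whole-event sizes per value of field_name rather than per-field bytes):

    index=index_name
    ``` measure the length of every extracted field in every event ```
    | foreach * [ eval len_<<FIELD>> = len('<<FIELD>>') ]
    | stats sum(len_*) as len_*
    ``` turn the field columns into rows and convert to GB ```
    | transpose 0 column_name=field
    | rename "row 1" as chars
    | eval GB=round(chars/1024/1024/1024, 3)
    | sort - GB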
I want to change a column's cell background based on the value, but I also want to use a wildcard. Example field values:

Passed (12:20)
Failure (2:30)
Passed (4:40)

I want to change the cell color based only on Passed or Failure and ignore the rest of the string.
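A Simple XML sketch, assuming a Classic dashboard table; the field name and colors are placeholders, and match() takes a regex, which gives the wildcard-style behavior asked for:

    <format type="color" field="Result">
      <colorPalette type="expression">if (match(value, "^Passed"), "#53A051", "#DC4E41")</colorPalette>
    </format>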
My query:

    index=primary eventType=ConnectionTest msg="network check results"
    | spath output=connectError details.error.connectionError
    | fillnull value=false connectError
    | dedup visitId
    | stats count as total, count(eval(connectError==true)) as errors

If I run this, "errors" always returns 0. However, if I run

    index=primary eventType=ConnectionTest msg="network check results"
    | spath output=connectError details.error.connectionError
    | fillnull value=false connectError
    | dedup visitId
    | stats count by connectError

connectError properly returns the set of values in each bucket of connectError. My dataset only sometimes contains the object "details.error"; I tried fillnull to handle that, but it didn't work. If I look at the Events data for either query, I do see "connectError" in the "Interesting Fields" list on the left-hand side. How do I get the first query to work so that I get the errors and the total? I want to follow it up with | eval percentErrors=errors/total, but I first need to get the stats to work properly.
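A sketch of a likely fix: spath extracts connectionError as the string "true", while an unquoted true inside eval is read as a field reference (which does not exist), so the comparison never matches. Quoting the literal gives:

    index=primary eventType=ConnectionTest msg="network check results"
    | spath output=connectError details.error.connectionError
    | fillnull value="false" connectError
    | dedup visitId
    | stats count as total, count(eval(connectError=="true")) as errors
    | eval percentErrors=round(errors/total*100, 2)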
Hi, I want to index a simple XML file:

    <?xml version="1.0" encoding="utf-8"?>
    <unitData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:noNamespaceSchemaLocation="unitData-1.0.xsd" unit="0000006000" equipment="W052A-22G0014" operator="admin" starttime="2022-11-22T06:10:53+01:00" endtime="2022-11-22T06:15:07+01:00" state="ok">
    </unitData>

Before indexing, I would like to create an additional attribute, machine, whose value depends on these conditions:

case equipment="W052A-22G0014": machine=machine1
case equipment="W052A-22G0013": machine=machine2

Can anybody help, please?
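An index-time sketch using INGEST_EVAL, under the assumption that the sourcetype is named unitdata_xml; this creates machine as an indexed field, so it must run where the data is parsed (indexer or heavy forwarder):

    # props.conf
    [unitdata_xml]
    TRANSFORMS-set_machine = set_machine

    # transforms.conf
    [set_machine]
    INGEST_EVAL = machine=case(match(_raw, "equipment=\"W052A-22G0014\""), "machine1", match(_raw, "equipment=\"W052A-22G0013\""), "machine2")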
What is the query to set up a report logging all activity from a user? Basically, any time they access the VPN and log into the network, and all the activity they are doing.
We've got Splunk_TA_Windows installed on a number of our servers sending data to our Splunk Cloud instance. However, far too much WinEventLog data is being sent, pushing us to the limits of our ingest volume. What are the best practices to lower this volume? I've already updated the props.conf file with the recommendations from the app installation, and we've made adjustments to winnetmon to lower that volume. Are there any other best practices out there? We don't want to just disable it entirely.
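One more lever worth sketching: the Windows event log input supports regex blacklists in inputs.conf on the forwarders. The event codes below are common noise candidates only, an assumption to verify against your own volume before filtering anything:

    [WinEventLog://Security]
    # drop high-volume Windows Filtering Platform events
    blacklist1 = EventCode="5156|5157|5158"
    # keep 4662 only for groupPolicyContainer object access
    blacklist2 = EventCode="4662" Message="Object Type:\s*(?!groupPolicyContainer)"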
Dear all, I have a use case in which my Splunk universal forwarder does not continuously monitor my logs. Because of this, I am using batch mode so that the files are deleted after ingestion. Now, I occasionally receive log files that I have already received at an earlier point in time. The problem is that features like crcSalt, initCrcLength, etc. are only available in monitor mode, which means I cannot benefit from Splunk's features for preventing duplicate ingestion of the same data. Any help on a solution for this is greatly appreciated.
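A search-time workaround sketch only, not a fix for the duplicate ingestion itself (license usage is already spent by the time this runs); the index name is a placeholder:

    index=mylogs
    ``` identical raw events collapse to the same hash and are deduplicated ```
    | eval event_hash=sha1(_raw)
    | dedup event_hash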