All Topics



Hi, I got a new requirement to build a dashboard showing server status (Up/Down). Unfortunately, our logs don't indicate any such status, like server started or server down. Any suggestions? Any examples I can look at?
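A common workaround when the logs carry no explicit status is to infer "Up" from whether a host has sent any data recently. A minimal sketch in SPL, assuming each server forwards events continuously; the 300-second silence threshold is arbitrary and would need tuning:

```
| tstats latest(_time) as last_seen where index=* by host
| eval status=if(now() - last_seen > 300, "Down", "Up")
| table host, status
```

This only works if the servers normally emit data at a steady rate; for hosts that log sporadically, a scripted input or a ping/uptime monitoring add-on is a better fit.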
After upgrading FortiAnalyzer (FAZ) to 6.2.3, I'm seeing Splunk timestamping issues from the FortiGate (FGT) logs it forwards to Splunk. To reiterate, FGT logs are sent to FAZ, then FAZ forwards those logs (via syslog) to Splunk. According to the FortiGate TA, this is supported, and it had worked before upgrading FAZ. What I'm seeing is all logs writing to a specific timestamp (in my case, 7:00 AM). Splunk does not seem to be extracting the timestamp field correctly. The TA's settings for timestamps are pretty basic: [fgt_traffic] TIME_PREFIX = ^ Everything else is default. Here is a sample event, that is getting written to 7:00 AM:   <189>logver=506051600 timestamp=1598992014 tz="UTC-7:00" devname="<redacted>" devid="<redacted>" vd="<redacted>" date=2020-09-01 time=13:26:55 logid="0000000013" type="traffic" subtype="forward" level="notice" eventtime=1598992015 srcip=<redacted> srcport=<redacted> srcintf="<redacted>" srcintfrole="wan" dstip=<redacted> dstport=<redacted> dstintf="<redacted>" dstintfrole="lan" poluuid="<redacted>" sessionid=2089596897 proto=6 action="timeout" policyid=1 policytype="policy" service="<redacted>" dstcountry="United States" srccountry="Netherlands" trandisp="noop" duration=10 sentbyte=40 rcvdbyte=0 sentpkt=1 rcvdpkt=0 appcat="unscanned" crscore=5 craction=262144 crlevel="low"   I tried changing the TIME_PREFIX to "timestamp=" and the TIME_FORMAT to "%s". No luck. Any ideas?
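For reference, a props.conf sketch that anchors on the epoch field in the sample above. Two caveats worth checking: timestamp settings only take effect on the first full Splunk instance that parses the data (indexer or heavy forwarder) and require a restart there, so TIME_PREFIX/TIME_FORMAT edits made anywhere else are silently ignored; and if the events arrive under a different sourcetype at parse time (e.g. a generic syslog sourcetype that the TA only renames at search time), the [fgt_traffic] stanza never matches at all:

```
[fgt_traffic]
TIME_PREFIX = timestamp=
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10
```

MAX_TIMESTAMP_LOOKAHEAD stops Splunk from reading past the ten epoch digits into the rest of the event.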
Hi All, I have a field in an improper format and I want to convert it into a new field with a proper format. Field name: Create, value: 20190802212241Z. What I am looking for is: new field name: NewField, value: 2019/08/02 21:22:41. Thanks in advance!
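For reference, the usual SPL approach is to parse the string with strptime and re-emit it with strftime; a sketch, assuming the trailing Z is a literal character in the value:

```
| eval NewField=strftime(strptime(Create, "%Y%m%d%H%M%SZ"), "%Y/%m/%d %H:%M:%S")
```

Note that strptime interprets the value in the search head's local time zone here; if the Z means UTC, the offset has to be handled explicitly.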
Hello, I have a question regarding migration. Currently we are using 4 search heads with a search head deployer, which we use to deploy apps to the search head cluster. Our deployer is on a CentOS 7 VM. We have another VM ready which is RHEL 7 and has a different IP address and DNS name. I have to move the Splunk deployer from CentOS 7 to the new RHEL 7 server. My plan is to break the current search head cluster and then reconfigure it to point to the new deployer. We have some apps in the old deployer's shcluster/apps location; if I copy that entire directory (or the entire Splunk directory) to the new VM, start Splunk on it, and then rebuild the SHC pointing at this new VM, will there be any problem with doing so? Will the existing apps on the SHC be overwritten? If there is an issue with this approach, please let me know the best way to achieve it.
Hi, can anyone provide an approach/steps for integrating a threat intelligence framework with Splunk ES? Also, how do I pull an active threat feed, export the offensive IP list to CSV, and get a hash file list from an API through an endpoint URL (I have that URL) using a Python script? I didn't clearly understand what the Splunk docs describe, so I'd appreciate it if anyone could put it together in simplified form. Thanks
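For the CSV-export part: once feeds are ingested, ES stores parsed indicators in KV store collections that can be read with inputlookup. A sketch, assuming the default ES IP-intelligence collection name ip_intel and its standard fields (verify both against your ES version before relying on them):

```
| inputlookup ip_intel
| table ip, threat_key, description
| outputlookup offensive_ips.csv
```

The same pattern applies to the file-hash collection for the hash list, under an analogous collection name.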
Has anyone successfully parsed fields from the data gathered with Azure Log Analytics KQL Grabber? We are pulling Log Analytics logs from Azure using KQL Grabber, which works great for this. However, because KQL Grabber sends everything to the sourcetype KQL, we can't consistently parse out fields for the different inputs we have defined within it.
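One common pattern for this situation is to split the single sourcetype at index time based on something in the payload. A sketch, assuming (hypothetically) that each event carries a JSON field named "Type" identifying its source table; the regex would need to match whatever actually distinguishes your inputs:

```
# transforms.conf
[kql_set_sourcetype]
REGEX = "Type"\s*:\s*"(\w+)"
FORMAT = sourcetype::kql:$1
DEST_KEY = MetaData:Sourcetype

# props.conf
[KQL]
TRANSFORMS-set_sourcetype = kql_set_sourcetype
```

This must live on the indexing/parsing tier; search-time field extractions can then be attached to each resulting kql:* sourcetype individually.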
I updated the Palo Alto TA to 6.3.1 first and then updated the Palo Alto Networks app to 6.3.1, and when I open the app, none of the panels populate. A basic search of Palo Alto logs works fine; all key/value pairs are there and haven't changed. In the app, when launching a panel's search in a new window, all the searches work. It appears the issue is with the beginning of the searches in the dashboards: "| $autofocus_tags$". Any ideas how to fix this? Thx
Hello, can you please help me with this question? Please give an explanation in a line or two. What is categorized as a dataset? A - Pivots B - Lookups C - Data Models D - Indexes
Hi, could you please help me figure out what is wrong with my regex? Splunk is returning a limit-exceeded error, while my regex is correct according to regex101. FYI, my log looks like this: XXXX|YYYYY|ZZZZZ|UUUUUU and my regex is: (?P<XXX>[\,+])\|(?P<YYY>[\,+])\|(?P<ZZZ>[\,+])\|(?P<UUU>[\,+])
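For reference, note that [\,+] is a character class matching a single comma or plus sign, not "one or more characters", which may explain the unexpected matching behavior. A sketch of a pipe-delimited extraction, keeping the post's illustrative field names:

```
| rex field=_raw "^(?P<XXX>[^|]+)\|(?P<YYY>[^|]+)\|(?P<ZZZ>[^|]+)\|(?P<UUU>[^|]+)"
```

[^|]+ means "one or more characters that are not a pipe", so each capture stops cleanly at the next delimiter.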
When visualizing results on a map (presumably using geostats, but maybe there's something else?), is there a way to get a small line or area chart instead of a pie chart? Basically, I have a bunch of requests over the past week or month, and I'm using iplocation to find where they are coming from. I want to see if there are places with a significantly different number of requests (preferably by http_response_status) from one day to the next. Even more basically: did we get 10K requests from (for example) St. Louis yesterday, when over the past week we usually get 500? Did we get 4K 401s from Buffalo when usually they're 3K 200s? I've tried combining iplocation, geostats, timechart, "bin + chart by _time", etc. in various ways and can't get the data to make sense, much less visualize it like I want. | iplocation ip | bin _time span=1d | stats count by _time, lat, lon, http_response_status Maybe this is two questions: 1) how to format the data, and 2) whether/how to visualize it. Thank you!
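On the data-shaping half, one sketch is to drop the map and compare each day against the location's own recent average, which directly answers the "10K vs the usual 500" question (field names are assumed from the search in the post; the 2x threshold is arbitrary):

```
| iplocation ip
| bin _time span=1d
| stats count by _time, City, http_response_status
| eventstats avg(count) as baseline by City, http_response_status
| where count > 2 * baseline
```

eventstats attaches the per-city, per-status average to every row, so each day can be tested against it; the surviving rows are the anomalous city/status/day combinations, which can then be tabled or timecharted.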
Hi guys, I'm trying to create a saved search (instead of typing the same search command a few times a day), but there's a small "catch" in my search: I want to put a multiple-choice value in one of the variables. E.g. the long search: index=console1 (sourcetype=c1:agent OR sourcetype="c1:agent_registered") (computerName="computer1" OR computerName="computer2" OR computerName="computer25") | stats count by host. I created a basic saved search: index=console1 (sourcetype=c1:agent OR sourcetype="c1:agent_registered") $computerName$ | stats count by host, so my computerName can be different every time I need to check a new machine, but I can only pass one at a time... Is there a way to add that option to my saved search?
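For reference, the IN operator (available in Splunk 6.6 and later) keeps the multi-value part compact, so the whole list can be passed through a single token. A sketch:

```
index=console1 (sourcetype=c1:agent OR sourcetype="c1:agent_registered") computerName IN (computer1, computer2, computer25)
| stats count by host
```

With the saved search, $computerName$ could then be supplied as the full clause, e.g. computerName IN (computer1, computer2), in one argument.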
Hi, I am trying to make a dashboard that searches events and extracts the correlationId from each event so I can display that information in a cleaner manner. I just want to be able to extract the correlationId using my search, and it comes in two main patterns. (The screenshots of the first event pattern, the second pattern, and my current search are not included here.) My ultimate goal is to make a table with a Correlation ID column and other vital information columns. I have not edited the source code yet, so please feel free to leave any feedback or clarifying questions if needed.
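Without the actual patterns there is little to anchor on, but as a purely illustrative sketch, a single rex can often cover two formats at once when both contain the literal key; the pattern below is hypothetical and not taken from the post:

```
| rex "correlationId[\"'=:\s]+(?<correlationId>[0-9a-fA-F-]{8,})"
| where isnotnull(correlationId)
| table _time, correlationId
```

For genuinely different layouts, two rex commands in sequence work too: rex leaves the field untouched when its pattern doesn't match, so the second extraction only fills in events the first one missed.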
Hello, I want to audit all activity in Splunk (for example: settings changes (UDP/TCP port configuration, receiving port configuration, ...), role modifications, index creation, etc.).
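For reference, Splunk records this kind of administrative activity in its internal _audit index; a starting-point sketch:

```
index=_audit action=edit*
| table _time, user, action, info
| sort - _time
```

Changes made through the UI or REST API show up here; changes made by editing .conf files directly on disk generally do not, so those need file integrity monitoring on the Splunk servers themselves.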
We have a requirement to add new fields. These fields are symbolic and can be considered like tags or categories. We are able to add these fields by updating default > data > manager > data_inputs_web_ping. We need some information on how to retain the values saved in these new fields and how to use them in the Status Overview and other searches.
Hi, I have a search that returns values from certain fields of an index. I would like the search to use a lookup table and check whether the values exist in the lookup table; if they do, they should be excluded from the search results. My search is below and returns 3 fields of the index in question: index=duo | fields user location.country location.city | table user location.country location.city. My lookup table is named locations.csv and has 3 columns: user, country, city. So, as an example, the values for one row could be John, France, Paris. If the search returns a result where user=John, location.country=France and location.city=Paris, I want it to be excluded from the search results, as it exists in the lookup file. It is important that all 3 values must exist on the same row of the lookup CSV for the result to be excluded. Can someone please help me with this? Thanks!
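For reference, a common pattern here is NOT with an inputlookup subsearch: each row of the subsearch becomes an exclusion condition whose fields must all match simultaneously, which gives exactly the all-3-values-on-one-row behavior. A sketch (the renames map the CSV columns onto the event field names):

```
index=duo
| search NOT [| inputlookup locations.csv
    | rename country as "location.country", city as "location.city"
    | fields user "location.country" "location.city"]
| table user "location.country" "location.city"
```

The quotes around the dotted field names matter; without them rename treats the dot as part of a different expression.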
Hi, I have some events in Splunk which are of this form: Location: some value (the same value can appear in multiple events), Client: some value (the same value can appear in multiple events), TransactionNumber: some value (unique for each event), TransactionTime: some value (unique for each event). Now I want a table with one row per Location, listing that location's clients and each client's transactions (screenshot not included). Basically, each location can have multiple clients and each client can have different transactions; transaction number and transaction time are unique and have a one-to-one mapping. I am using this query: | stats list(TransactionNumber) list(TransactionTime) by Location Client. What's happening is that I am getting each unique combination of location and client, but what I want is the unique clients listed against a particular Location (screenshot of current output not included). How can the query be modified to achieve this?
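One sketch is to collapse in two passes: first per Location+Client, joining each client's transactions into a single string so they stay attached to that client, then per Location:

```
| eval txn=TransactionNumber." @ ".TransactionTime
| stats list(txn) as Transactions by Location, Client
| eval Transactions=mvjoin(Transactions, "; ")
| stats list(Client) as Client, list(Transactions) as Transactions by Location
```

After the second stats, the Nth value of Client lines up with the Nth value of Transactions within each Location row, since both lists are built from the same rows in the same order.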
Hi all, we are planning a Splunk Enterprise deployment in Azure, but I am not able to find up-to-date documentation about it. I'd like to know if and how it differs from a standard on-premises deployment, and which VM type we should choose. This is the only document I was able to find: https://www.splunk.com/pdfs/technical-briefs/deploying-splunk-enterprise-on-microsoft-azure.pdf Is it up to date? Thanks!
I'm trying to move from a transaction command to streamstats. I get most of the way there, but I can't figure out the streamstats 'reset' needed to properly group starts and ends together. Here's my current query, which isn't finding the right end time: since my BY clause matches in multiple places, latest(_time) picks up the wrong "latest" time rather than the actual end of each run...

      index="logfiles" appType=reports* "Generating * Status Report" | rex \.(?<reportName>Generate\w*) | eval reportName=replace (reportName,"Generate","") | eval reportName=replace (reportName,"Report","") | streamstats earliest(_time) as stime by reportName appType | join appType,reportName [search index="logfiles" appType=reports* "Report generated successfully" | rex \.(?<reportName>\w+)ReportGenerator | streamstats reset_on_change=true reset_after="("searchmatch(\"Report generated successfully\")")" latest(_time) as etime BY reportName appType] | eval diff=etime-stime | eval hhmmss=tostring(diff, "duration") | convert timeformat=" %a %b %d %I:%M:%S.%3N %p %Z" ctime(stime) as StartTime | table StartTime appType reportName hhmmss |rename hhmmss as RunDuration
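An alternative sketch that avoids both join and reset entirely: pull start and end events in one search, number the runs with streamstats (each start event increments a run counter), then pair each start with its end via stats. This assumes runs do not interleave within an appType; the reportName extraction is omitted for brevity:

```
index="logfiles" appType=reports* ("Generating * Status Report" OR "Report generated successfully")
| eval marker=if(searchmatch("Generating"), "start", "end")
| sort 0 _time
| streamstats count(eval(marker=="start")) as runId by appType
| stats earliest(_time) as stime, latest(_time) as etime by appType, runId
| eval RunDuration=tostring(etime - stime, "duration")
```

Because the run counter only increments on start events, each end event inherits the runId of the start that preceded it, so earliest/latest within a runId give the true start and end times.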
Hi, I ran two Splunk searches and the results don't come out the same. The first search uses tstats (time range: yesterday): | tstats `summariesonly` count from datamodel=Authentication.Authentication where index=wineventlog Authentication.user=some_user — the result is 8990. The second search: index=wineventlog user=some_user tag=authentication NOT (action=success user=*$) | stats count — the result is 9000. Why are the data model and the normal Splunk search results different? Also, the data model acceleration status is 99%; could the problem be caused by this? Thank you.
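Quite possibly, yes: summariesonly restricts tstats to events that have already been summarized, so with acceleration at 99% the newest, not-yet-summarized events are silently skipped. A quick check is to let tstats fall back to raw data for the unsummarized tail:

```
| tstats summariesonly=false count from datamodel=Authentication.Authentication where index=wineventlog Authentication.user=some_user
```

If the counts converge, the gap was the unsummarized slice. Any remaining difference would come from the two searches not filtering identically; the data model applies its own constraint searches, and the raw search adds a NOT (action=success user=*$) clause that the tstats search does not.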
I don't have much experience with Splunk, but I'm starting to use it in a new role and have done a lot of research before asking this question. There are two parts, and I cannot provide screenshots. I'm running Splunk Enterprise with 3 workstations and 1 DC forwarding to the backup DC, which hosts the Splunk server. We recently did a hardware update and began exceeding our license by 3-4x per day. The configuration didn't change, and I cannot find what is causing this. I blacklisted the 10 event codes that were generating 80% of the logs, and while they no longer show in my searches, the server appears to continue to index them; by 8am today my index capacity was at 17500MB/5000MB for the day. I've also noticed anywhere from 50-1500 event logs for a single "Record Number". It's my understanding that a record number is unique to a single event, which means one event is getting logged several times; the timestamps are the same down to the millisecond. This, I would argue, is the bigger issue.

[WinEventLog://Security]
disabled = 0
start_from = newest
blacklist = 4648,4701,... <-- ... is not literal, I just have 8 more
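For reference, license usage can be broken down by host and sourcetype from the license manager's own internal logs, which usually pinpoints what changed after the hardware update; a sketch:

```
index=_internal source=*license_usage.log* type=Usage
| eval MB=round(b/1024/1024, 2)
| stats sum(MB) as MB by h, st
| sort - MB
```

In license_usage.log, b is bytes indexed, h is the host, and st is the sourcetype. Also note that a WinEventLog blacklist only takes effect on the instance actually collecting the events, and only after Splunk restarts there; if the stanza was edited on the indexer while a forwarder does the collection, the events will keep arriving, which matches the "blacklisted but still indexed" symptom.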