All Topics

Hi, I am forwarding Sysmon logs to Splunk for normalization. I can see that event IDs 12, 13, and 14 are captured (registry object added or deleted, registry value added, registry value modified). All of them are success events; will there be any failure events under the above-mentioned event IDs?
Good morning, I am trying to group the count by percentile; however, everything is showing as 0%, which is incorrect:

source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U" index="main"
| bucket span=1d _time
| eventstats p75(count) as p75 p95(count) as p95 p99(count) as p99
| eval Percentile = case(count >= p75, "75%", count >= p95, "95%", count >= p99, "99%", 1=1, "0%")
| stats count by Percentile

Not really sure how to fix this; any help would be greatly appreciated. Thanks, Joe
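A possible sketch of a fix (assuming the goal is to bucket daily event counts by percentile): `case()` returns the first matching clause, so the broadest test (`>= p75`) must come last, not first; and a `stats count` is needed before `eventstats` so that a `count` field actually exists for the percentile functions to operate on:

```
source="C:\\inetpub\\logs\\LogFiles\\*" host="WIN-699VGN4SK4U" index="main"
| bucket span=1d _time
| stats count by _time
| eventstats p75(count) as p75 p95(count) as p95 p99(count) as p99
| eval Percentile = case(count >= p99, "99%", count >= p95, "95%", count >= p75, "75%", 1=1, "0%")
| stats count by Percentile
```

With the original ordering, any day at or above the 75th percentile matches the first clause and never reaches the 95% or 99% tests; and without the intermediate `stats count`, the percentile fields are null, so every event falls through to the `1=1` default of "0%".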
Hi Phantom team, I have a simple use case: renaming a file in the vault. Since vault files are immutable, I copied the contents to the vault temp directory and renamed the file there. Before adding the renamed file back into the vault, I ran a vault delete for the existing vault ID. Still, I get aka: [old name, new name] in the vault info for the new file added to the vault. And the strange thing is that it gets the same vault ID. Thanks, Sunil
Hey everyone, I am trying to search a field to see how much a customer is spending, but there is a letter in front of the value, e.g. "cost" : "C1000", meaning they spent $1000. So, for example, I want to search for when the user spends between C1000 and C20000. Is there a way to remove the C and search on the numeric part of the result? This is what I have so far:

index="silverprod" source=*finance* ("Lambda" "Payload") NOT (lambda-warmer) *topup*
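One possible sketch (assuming a field named `cost` is already extracted; strip the leading letter with `rex`, convert to a number, then filter on the numeric range):

```
index="silverprod" source=*finance* ("Lambda" "Payload") NOT (lambda-warmer) *topup*
| rex field=cost "^C(?<cost_num>\d+)$"
| eval cost_num=tonumber(cost_num)
| where cost_num >= 1000 AND cost_num <= 20000
```

The `tonumber()` step matters: without it, the comparison would be a string comparison, and "9000" would sort above "20000".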
Hi, I have the below sources:

source = C:\Stats\user1\Tmpdata\Mappers\Consolesx\start.log
source = C:\Stats\user2\Tmpdata\Mappers\Consolesx\start.log
source = C:\Stats\user3\Tmpdata\Mappers\Consolesx\start.log
source = C:\Stats\user4\Tmpdata\Mappers\Consolesx\start.log

Instead of displaying the full paths, I want source to display just:

source = user1
source = user2
source = user3
source = user4

Can we have a rex for this one?
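A sketch of one way to do this at search time (assuming the user name is always the path segment after `C:\Stats\`; note the quadruple backslashes, since both the SPL string parser and the regex engine consume one level of escaping):

```
... | rex field=source "C:\\\\Stats\\\\(?<user>[^\\\\]+)\\\\"
    | eval source=user
```

If this should apply everywhere rather than per-search, the same extraction could instead be defined as a search-time field extraction in props.conf.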
The environment is a search head cluster with 3 search heads. Whenever I need to add a new transforms extraction or props extraction, I need to modify the file /opt/splunk/etc/apps/search/local/props.conf, copy it over to all search heads, and then do a rolling restart. The copy part isn't a problem (just run a script), but the rolling restart is disruptive to the production environment and causes a long wait every time. Is there a smoother way to modify props.conf and transforms.conf and replicate their contents in a search head cluster environment?
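For reference, the usual pattern is to make such changes on the SHC deployer rather than copying files member-to-member. A sketch, with host names and credentials as placeholders (for search-time-only props/transforms changes, the deployer decides whether a reload is sufficient or a restart is required, so a full manual rolling restart is often avoidable):

```
# On the deployer: edit the app copy under the shcluster staging directory
vi /opt/splunk/etc/shcluster/apps/search/local/props.conf

# Push the configuration bundle to all cluster members
/opt/splunk/bin/splunk apply shcluster-bundle -target https://shc-member1:8089 -auth admin:changeme
```

The `-target` flag points at any one cluster member; the bundle is then distributed to all members.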
I have two data sources (syslog and NetFlow) which I am collecting on a dedicated host, where I have installed a Universal Forwarder. It is acting as an intermediate forwarder. I have to route this data to the indexers of two different organisations, into their respective indexes. E.g.:

OrgA: syslog needs to go to index=syslog_A, NetFlow needs to go to index=netflow_A; the indexer is IndexerA:9997
OrgB: the same syslog needs to go to index=syslog_B, the same NetFlow needs to go to index=netflow_B; the indexer is IndexerB:9997
MyOrg: only Splunk internal logs go to IndexerMyOrg

Because this routing is based on metadata, I believe I should be able to achieve this using the Universal Forwarder. Can someone please advise how I can achieve this?
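A sketch of the routing half, with hypothetical group names and monitor paths: `_TCP_ROUTING` on an input can clone that input to multiple tcpout groups on a Universal Forwarder. One caveat worth noting: an event carries a single `index` field, so sending the same event with a different index name per destination (syslog_A vs. syslog_B) is not something a UF can do on its own; that renaming would need a heavy forwarder, or index-time rules on each organisation's indexers.

```
# outputs.conf on the intermediate forwarder
[tcpout]
defaultGroup = myorg

[tcpout:orgA]
server = IndexerA:9997

[tcpout:orgB]
server = IndexerB:9997

[tcpout:myorg]
server = IndexerMyOrg:9997

# inputs.conf -- clone each feed to both organisations
[monitor:///var/log/syslog_feed]
index = syslog_A
_TCP_ROUTING = orgA,orgB

[monitor:///var/log/netflow_feed]
index = netflow_A
_TCP_ROUTING = orgA,orgB
```

With `defaultGroup = myorg`, anything without an explicit `_TCP_ROUTING` (including the forwarder's internal logs) goes only to IndexerMyOrg.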
Hello guys, do you know if upgrading versions automatically renews the default certificates, such as a valid or expired server.pem? If yes, for how long? I know that renaming server.pem causes it to be regenerated with a 3-year validity. This is Splunk 7.3.4; the common CA is valid until 2027. Thanks
To the WebTools dev @jkat54: would it be possible to upload and use your WebTools add-on in Splunk Cloud ITSI?
Good afternoon, I can't make sense of why I can't extract a definition from a particular CSV. I double-checked permissions and verified that all of my columns are appearing via:

| inputlookup file.csv | table loopback, device

The output recognizes both the custom device data and loopback, but if I attempt to table the info in my search, "device" is not recognized:

index=index "syslog message"
| rex field=_raw "peer (?<neighbor>\d+.\d+.\d+.\d+.)"
| dedup neighbor
| lookup xo-access-loopback loopback as neighbor output device
| table device, neighbor

I get neighbor output but not device. The CSV looks like:

device loopback
routername x.x.x.x

Any ideas?
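One thing worth checking (a sketch, not a confirmed diagnosis): in the rex pattern, the dots are unescaped and there is a trailing `.`, so the captured `neighbor` value can include one extra trailing character; a value like "10.1.1.1," will never exactly match the loopback values in the lookup, which would produce exactly this symptom. A tightened version:

```
index=index "syslog message"
| rex field=_raw "peer (?<neighbor>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| dedup neighbor
| lookup xo-access-loopback loopback as neighbor OUTPUT device
| table device, neighbor
```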
Hi everyone, I am currently having an issue where, when I export my dashboard as a PDF, some of the pie charts aren't showing up, but their labels are. I initially thought that converting the panel to a report would solve this, thinking it was an issue with data loading, but that didn't seem to work. Any suggestions?
Hello everyone. I have a dashboard with embedded queries that has stopped working since my update from 8.1 to 8.2.1. The query is as follows:

host=sftpserver* source=/var/log/messages close bytes read written
| rex "close \"(?<filename>.[^\"]+)"
| rex "written (?<writebytes>\w+)"
| rex "read (?<readbytes>\w+)"
| eval rfile=if(readbytes == 0, null(), filename)
| eval wfile=if(writebytes == 0, null(), filename)
| eval writemb=(writebytes/1024/1024)
| eval readmb=(readbytes/1024/1024)
| eval readmb=round(readmb,2)
| eval writemb=round(writemb,2)
| eval datetimestr=strftime(_time, "%Y-%m-%d %H:%M")
| chart count(rfile)

The query just gives a count of the number of files downloaded from my SFTP server. Since the update, the dashboard that includes this query (and several others) only shows data from before the time when Splunk was updated. Copying the query from the dashboard into a search, I can get it to work in Verbose mode, but neither Fast nor Smart mode returns any results.
Hi everyone, please, what is the search query to find:

1. The current health status of a URL check for API services: if the regex-extracted field equals 200, the check is successful; otherwise the service is considered failed.
2. For any selected time range, the API availability percentage, i.e. the percent of time each service returned 200 (success).

Thank you
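A possible sketch covering both questions, assuming an index `api_checks`, a `service` field, and an extracted field `status` holding the HTTP code (all three names are hypothetical placeholders for whatever the actual data uses):

```
index=api_checks
| eval result=if(status=200, "success", "failed")
| stats latest(result) as current_health
        count(eval(status=200)) as ok_count
        count as total
        by service
| eval availability_pct=round(ok_count/total*100, 2)
```

`latest(result)` answers the current-health question, and `ok_count/total` over whatever time range is selected in the time picker answers the availability question.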
Hello! I don't normally load data into Splunk, as I am primarily a front-end user. However, I would like to load some of the attack datasets that Splunk has provided on GitHub: attack_data/datasets/attack_techniques at master · splunk/attack_data · GitHub. Does anyone have config files for loading the Windows log files posted there? My admin says they are flat files and we do not currently have configuration files for ingesting them. Thank you so much, Cindy
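A sketch of one way to ingest a downloaded flat file as a one-off (the path, index, and sourcetype below are placeholders; many of the Windows files in attack_data are exported XML event logs, for which a matching sourcetype such as `XmlWinEventLog` may be needed so the events parse correctly):

```
# One-time ad hoc ingest of a downloaded dataset file
/opt/splunk/bin/splunk add oneshot /tmp/windows-sysmon.log -index test -sourcetype XmlWinEventLog
```

For repeated loading, the same effect can be had with a `[monitor://...]` stanza in inputs.conf pointing at the download directory.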
Disclaimer: This is an issue with VMware and not Splunk. But looking to see if others in the community have seen the same issue. Our infrastructure team has recently upgraded VMware to v7. We immediately started receiving an additional terabyte of logs from their hosts. They have a support case open with the vendor to figure out why, and the vendor agrees that this is not normal and we shouldn't have seen this volume spike. They're leaning more towards a misconfiguration than a bug. They're also wondering if others in the community have seen the same from VMware v7, and if so how was it handled. Has anyone who is using v7 of VMware seen the logging spike? If so, how did you resolve it? 
Hello, can anyone kindly assist me with this item? I have multiple web servers, and not all of them are forwarding their IIS logs into Splunk. I have configured my inputs.conf as:

[monitor://G:\wwwlogs]
disabled = 0
sourcetype = ms:iis:auto
index = iislogs
initCrcLength = 2048
alwaysOpenFile = 1

I have attempted many different settings and scenarios. Any help is much appreciated. Thanks in advance
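Two checks that often help narrow this kind of problem down on the forwarders that are not sending (a sketch; the install path assumes a standard Universal Forwarder location):

```
# Show the effective monitor configuration, including which file it came from
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug

# Ask splunkd which files it is actually tailing and their read status
/opt/splunkforwarder/bin/splunk list inputstatus
```

Comparing the output between a working and a non-working server usually shows whether the stanza is even being applied, or whether the files are seen but skipped (e.g. as already-read duplicates).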
We've been using the Splunk Add-on for F5 BIG-IP to forward F5 ASM events to our syslog cluster (two Linux servers) that have a Splunk UF installed and monitoring the F5 directory. This configuration has been running for two years with no problems as Splunk ingests the ASM F5 logs/events. Today in the F5 GUI, I enabled the forwarding of bot defense logs/events to the syslog cluster using the same port and when I check the log file on the syslog cluster, I see the raw bot defense events, but when I check Splunk, I am not seeing the events at all. Any ideas as to why these raw events aren't being picked up? Thx
I tried searching for this and didn't find anything, so apologies if it's already been answered. If we run that URL, we get a page with a refresh button. We hit the button, and what's returned is a bunch of HTML code. Has anyone experienced this, and how was it resolved? Running Splunk 8.2.
Hey there, I just started with Splunk. Currently I'm testing the new Dashboard Studio feature. I would like to count all searches with 1 or more found events on my dashboard. In the case of the attached picture, I would like to display 3 in the upper single-value field. If the search for Event 2 has 0 results, I would like to display 2, and so on. Is there a simple and scalable solution to this, if I want to add more searches to the dashboard later on? Thank you in advance!
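One possible pattern for the single-value panel (a sketch; the three sub-searches below are placeholders for the dashboard's actual panel searches): union the searches, reduce each to its event count, and then count how many returned at least one event. Adding a search later just means adding one more subsearch line.

```
| union [ search index=main sourcetype=event1 | stats count ]
        [ search index=main sourcetype=event2 | stats count ]
        [ search index=main sourcetype=event3 | stats count ]
| where count > 0
| stats count as searches_with_results
```

Be aware that subsearches have result and runtime limits, so for a large number of panels a scheduled base search may scale better.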
Is it possible for individual indexers to be restarted from the Monitoring Console (MC)? Please show the steps. Thanks very much.