All Topics

We have clustered indexing as part of our Splunk architecture deployed on Oracle Cloud, and we are trying to come up with a reliable backup and restore strategy. OCI has buckets for object storage. Would it make sense to have backups of warm, cold, and archived/frozen indexed data on the Splunk servers taken daily and then sent over to an OCI bucket? Splunk deployments on OCI aren't common, so I'm also interested in hearing what people have done with Splunk deployments in AWS. I believe that S3 buckets have been used for backup storage, with the data then restored from there straight to the Splunk servers.
I'm trying to do some lookup table rationalization. Some of the sources we pull into lookup tables are changing, and I'll need to find new sources for some of my data types. I'm trying to find a better way to get stats on the fields used for lookup and inputlookup matches, as well as on output results, so I can better weight the criticality of certain data sources in my org and push for better data coverage at the source for important fields.

The way I've done this so far is through | rest for saved searches and macros, followed by an unholy amount of regexes to capture all of the worry-free use of cases and conditions and in-line renames using AS and WHERE. I haven't even started with views... There simply must be a better way. Is there anything in splunk_introspection that would basically give the equivalent of sum(count) of a particular lookup field by any saved search or macro?
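For what it's worth, once the saved-search strings come back via | rest, the counting itself is simple. A minimal sketch in Python, with hypothetical search strings and a deliberately simplified regex that only catches the plain `| lookup <name>` / `| inputlookup <name>` forms (no macros, no renames):

```python
import re
from collections import Counter

# Hypothetical saved-search strings, as pulled from | rest /services/saved/searches
searches = [
    "index=a | lookup assets ip OUTPUT owner",
    "| inputlookup assets | stats count",
    "index=b | lookup users name AS user OUTPUT dept",
]

# Count lookup-table usage across all searches
usage = Counter()
for spl in searches:
    for name in re.findall(r"\|\s*(?:input)?lookup\s+(\S+)", spl):
        usage[name] += 1

print(usage.most_common())  # [('assets', 2), ('users', 1)]
```

This only approximates what the regex pile over saved searches already does; per-field match stats at search time would still need something on the Splunk side.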
This search gives me all the data I need, but I would like to display only email accounts that end with a specific domain; for instance, I want to display only accounts that end with @d.com. Here is my query: Here are my results: The only accounts that I want to display are the ones ending with @d.com, but as you can see, the search is showing other accounts, like this one: @ge.com. Can someone tell me how to do this? Thanks!
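The matching logic being asked for, sketched in Python with made-up sample addresses (in SPL this would typically be a regex match on the email field). Anchoring on the `@` matters: a bare ends-with check on "d.com" would also accept lookalike domains such as old.com:

```python
import re

# Hypothetical sample addresses; the real ones come from the search results
emails = ["alice@d.com", "bob@ge.com", "carol@d.com", "dave@old.com"]

# Require "@d.com" at the very end so only that exact domain passes
wanted = [e for e in emails if re.search(r"@d\.com$", e)]
print(wanted)  # ['alice@d.com', 'carol@d.com']
```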
We see lots of alerts right now, so I thought I would develop a dashboard that quickly searches through the alert configurations themselves to see if I can spot any trends, and, while I'm at it, find data on when they were fired. I read that alert configurations end up in savedsearches.conf, but how do I search that? Is this even possible? I have a feeling it involves a REST command, but the ones I'm writing return data other than what I want. Or else I'm searching the _internal index. Thanks!
Let's say we have the following log events:

time1 text=g count=82
time2 text=f count=80
time3 text=c count=14
time4 text=e count=13
time5 text=b count=11
time6 text=a count=10
time7 text=d count=6

The following query will get the top N results:

earliest=time1 latest=time7 index=blabla | stats sum(count) as count by text

Result:

text | count
g | 82
f | 80
c | 14
e | 13
b | 11
a | 10
d | 6

I need a query to get the top 3 plus an "others" row, for example:

text | count
g | 82
f | 80
c | 14
others | 40
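The "top 3 plus others" arithmetic implied by the desired output, sketched in Python (in SPL this is often built with a sort/head plus an appendpipe or eventstats pass; the sketch just shows the numbers line up):

```python
from collections import Counter

# Counts from the sample events above
counts = Counter({"g": 82, "f": 80, "c": 14, "e": 13, "b": 11, "a": 10, "d": 6})

top = counts.most_common(3)                       # the top 3 rows
others = sum(counts.values()) - sum(c for _, c in top)  # everything else, summed
result = top + [("others", others)]
print(result)  # [('g', 82), ('f', 80), ('c', 14), ('others', 40)]
```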
Hi,

In general, I want to understand the pros and cons of scheduling reports on indexers. If the reports are on indexers, won't that data be pulled by the search head anyway? I have a summary index report which runs on the search head and forwards data to the indexer. I want to cut overhead on the SH and schedule this report on the indexer, letting the filtered summary index be picked up by the search head. Is it possible?
Q: I need to forward the data from all the indexes (Windows, Linux, etc.) to CyberArk PTA via syslog (or any other method) from the Splunk indexer, as we don't have a HF in our environment. I have followed the documentation given by CyberArk on the PTA Splunk integration, but it is not working for me (logs are not forwarding to the PTA server).

Link: https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/11.2/en/Content/PTA/Configuring-Splunk-Forward-syslog-messages.htm

Configuration on the indexer, in SPLUNK_HOME/etc/system/local:

outputs.conf:

[syslog:pta_syslog]
server = <PTA Server IP>:<port>
indexAndForward = true
type = tcp
timestampformat = %s
syslogSourceType = sourcetype:: linux:messages

props.conf:

[source::WinEventLog:Security]
TRANSFORMS-pta = pta_syslog_filter

transforms.conf:

[pta_syslog_filter]
REGEX = .*EventCode=4624|4720|4723|4724|4732.*
DEST_KEY = _SYSLOG_ROUTING
FORMAT = pta_syslog
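One detail worth checking, independent of the routing setup: ungrouped alternation in the posted REGEX binds loosely, so a bare `4720` matches anywhere in an event, not just after `EventCode=`. A quick Python illustration (Splunk uses PCRE, but alternation precedence behaves the same; the grouped variant is a suggested fix, not taken from the CyberArk doc):

```python
import re

posted = r".*EventCode=4624|4720|4723|4724|4732.*"
# The alternatives are ".*EventCode=4624", "4720", ..., so "4720" inside an
# unrelated field is enough to match:
print(bool(re.search(posted, "EventCode=9999 id=47201")))  # True (unintended)

grouped = r"EventCode=(4624|4720|4723|4724|4732)(\D|$)"
print(bool(re.search(grouped, "EventCode=9999 id=47201")))  # False
print(bool(re.search(grouped, "EventCode=4624 user=bob")))  # True
```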
I am trying to tune an alert, but need to exclude results only if two of three fields do not contain a string. My goal is to tune out improbable-access alerts where certain users log in from two locations within the United States. The search results are below. The SPL without the exclusion is:

`m365_default_index` sourcetype="o365:management:activity" Operation=UserLoggedIn
| rename ClientIP AS src_ip
| sort 0 UserId, _time
| streamstats window=1 current=f values(_time) as last_time values(src_ip) as last_src_ip by UserId
| iplocation last_src_ip
| rename Region as State
| eval last_location = if(isnotnull(City) AND City!="", City . ", ", "") . if(isnotnull(State) AND State!="", State . ", ", "") . if(isnotnull(Country) AND Country!="", Country . ", ", "")
| rename lat as last_lat lon as last_lon Country as last_Country
| iplocation src_ip
| rename Region as State
| eval location = if(isnotnull(City) AND City!="", City . ", ", "") . if(isnotnull(State) AND State!="", State . ", ", "") . if(isnotnull(Country) AND Country!="", Country . ", ", "")
| foreach *location [ | eval <<FIELD>> = replace(replace(<<FIELD>>, "^\s*,\s*", ""), "\s*,\s*$$", "")]
| eval rlat1 = pi()*last_lat/180, rlat2=pi()*lat/180, rlat = pi()*(lat-last_lat)/180, rlon= pi()*(lon-last_lon)/180
| eval a = sin(rlat/2) * sin(rlat/2) + cos(rlat1) * cos(rlat2) * sin(rlon/2) * sin(rlon/2)
| eval c = 2 * atan2(sqrt(a), sqrt(1-a))
| eval distance = 6371 * c, time_difference_hours = round((_time - last_time) / 3600,2), speed=round(distance/ ( time_difference_hours),2)
| fields - rlat* a c
| eval day=strftime(_time, "%m/%d/%Y")
| search last_Country!=Country distance!=0 speed>1000
| stats values(time_difference_hours) as time_difference_hours values(speed) as speed first(last_location) as location_one first(location) as location_two values(*src_ip) as *src_ip min(_time) as firstTime by UserId distance day
| eval firstTime=strftime(firstTime, "%m/%d/%Y %H:%M:%S")
| sort - distance
| search NOT UserId="Unknown"
| search distance>500 AND speed>500
| lookup Executives.csv Email as UserId OUTPUTNEW Title
| lookup CriticalUsers.csv Email as UserId OUTPUTNEW Title
| eval Severity=case(isnotnull(Title),"Critical",isnull(Title),"Medium")

I have tried just about every combination of search NOT, search fielda!=blah, etc. Does anyone know how to do this? An example of what I have tried is below:

| search NOT UserId="someuser" AND NOT location_one="*United States" AND NOT location_two="*United States"
I have data like this in Splunk, and I want to extract the value of status, which is Active. How can I do it when this is not a valid JSON string?

mydata {
  name { value: "1111" }
  id { value: "2020-07-02 15:49:00" }
  status { value: "Active" }
}

Any help is appreciated.
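Since the payload isn't valid JSON, a regex pull of the quoted value is one option. A Python sketch of the pattern (the same regex could feed a rex extraction on the Splunk side):

```python
import re

event = 'mydata { name { value: "1111" } id { value: "2020-07-02 15:49:00" } status { value: "Active" } }'

# Grab the quoted value inside the status { ... } block
m = re.search(r'status\s*\{\s*value:\s*"([^"]+)"', event)
status = m.group(1) if m else None
print(status)  # Active
```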
Hi,

We are planning to ingest close to 100 GB/day for the next 2 years. Eventually, we estimate that ingestion will reach 300 GB/day. The requirement is to have data available/online for 90 days (hot/warm/cold), while all data older than 90 days shall be frozen and archived for 10 years. Yes, that is a major storage requirement, but given that our Splunk components shall be set up in the AWS Cloud environment, we do have the ability to scale up on storage over time.

We have 4 indexers (installed on RHEL Linux instances) in a cluster, with specs of 16 CPU and 64 GB RAM each. The indexers are expected to achieve at least 800 IOPS at full capacity, but not much more, at least for now, since we are only planning to ingest 100 GB/day. The plan is to store the hot, warm, and cold buckets on a single volume, while the frozen/archived data shall be stored on a second volume.

Based on these details, which RAID level and count of disks per volume would be recommended for this setup?
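A rough sizing sanity check for the 90-day tier, assuming the commonly cited ~50% on-disk compression of raw data and an assumed replication factor of 2; both numbers are placeholders to adjust for the actual cluster settings:

```python
daily_ingest_gb = 100      # from the question
retention_days = 90        # hot/warm/cold retention
compression = 0.5          # assumed stored-size / raw-size ratio
replication_factor = 2     # assumed index replication factor

# Total hot/warm/cold storage across the cluster, and per indexer (4 indexers)
hot_warm_cold_gb = daily_ingest_gb * retention_days * compression * replication_factor
per_indexer_gb = hot_warm_cold_gb / 4
print(hot_warm_cold_gb, per_indexer_gb)  # 9000.0 2250.0
```

The frozen tier grows on top of that for 10 years, so the second volume dominates over time regardless of RAID choice.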
Hello, While this helpful Splunk document ( https://docs.splunk.com/Documentation/Splunk/8.0.4/Deploy/Manageyourdeployment ) provides some insight on which Splunk components a Deployer can be colocated with, I'm looking for advice for my specific situation, where we are anticipating ingestion of less than 200 GB/day. We are planning to have 2 standalone Enterprise Security Search Heads and 3 Enterprise Search Heads in a cluster. Each SH will run on an instance with 16 CPU and 64 GB RAM. We are planning to colocate the Cluster Master and License Master (8 CPU, 64 GB RAM), as well as the Deployment Server with the Monitoring Console (12 CPU, 64 GB RAM). Would it be feasible to colocate the Deployer with the DS + MC or the CM + LM? Or would you recommend that the Deployer be installed on a standalone instance?
We are using ingest pattern as API at Heavy forwarder. props.conf:- [kenna:applications] INDEXED_EXTRACTIONS = json TZ = UTC LINE_BREAKER = "\}\,\{\"id\"\: TRUNCATE = 10485760 SHOULD_LINEMERGE = false This line breaker did not work Sample Log:- {"applications":[{"id":3964,"name":"xyz.com","repo_url":null,"host_name":null,"owner":null,"team_name":null,"business_units":null,"notes":null,"risk_meter_score":0,"vulnerability_count":0,"asset_count":0,"total_vulnerability_count":0,"open_vulnerability_count_by_risk_level":{"high":0,"medium":0,"low":0,"total":0},"historical_risk_meter_scores":[{"date":"2020-04-07","score":0},{"date":"2020-04-08","score":0},{"date":"2020-04-09","score":0},{"date":"2020-04-10","score":0},{"date":"2020-04-11","score":0},{"date":"2020-04-12","score":0},{"date":"2020-04-13","score":0},{"date":"2020-04-14","score":0},{"date":"2020-04-15","score":0},{"date":"2020-04-16","score":0},{"date":"2020-04-17","score":0},{"date":"2020-04-18","score":0},{"date":"2020-04-19","score":0},{"date":"2020-04-20","score":0},{"date":"2020-04-21","score":0},{"date":"2020-04-22","score":0},{"date":"2020-04-23","score":0},{"date":"2020-04-24","score":0},{"date":"2020-04-25","score":0},{"date":"2020-04-26","score":0},{"date":"2020-04-27","score":0},{"date":"2020-04-28","score":0},{"date":"2020-04-29","score":0},{"date":"2020-04-30","score":0},{"date":"2020-05-01","score":0},{"date":"2020-05-02","score":0},{"date":"2020-05-03","score":0},{"date":"2020-05-04","score":0},{"date":"2020-05-05","score":0},{"date":"2020-05-06","score":0},{"date":"2020-05-07","score":0},{"date":"2020-05-08","score":0},{"date":"2020-05-09","score":0},{"date":"2020-05-10","score":0},{"date":"2020-05-11","score":0},{"date":"2020-05-12","score":0},{"date":"2020-05-13","score":0},{"date":"2020-05-14","score":0},{"date":"2020-05-15","score":0},{"date":"2020-05-16","score":0},{"date":"2020-05-17","score":0},{"date":"2020-05-18","score":0},{"date":"2020-05-19","score":0},{"date":"2020-05-20","score":0},
{"date":"2020-05-21","score":0},{"date":"2020-05-22","score":0},{"date":"2020-05-23","score":0},{"date":"2020-05-24","score":0},{"date":"2020-05-25","score":0},{"date":"2020-05-26","score":0},{"date":"2020-05-27","score":0},{"date":"2020-05-28","score":0},{"date":"2020-05-29","score":0},{"date":"2020-05-30","score":0},{"date":"2020-05-31","score":0},{"date":"2020-06-01","score":0},{"date":"2020-06-02","score":0},{"date":"2020-06-03","score":0},{"date":"2020-06-04","score":0},{"date":"2020-06-05","score":0},{"date":"2020-06-06","score":0},{"date":"2020-06-07","score":0},{"date":"2020-06-08","score":0},{"date":"2020-06-09","score":0},{"date":"2020-06-10","score":0},{"date":"2020-06-11","score":0},{"date":"2020-06-12","score":0},{"date":"2020-06-13","score":0},{"date":"2020-06-14","score":0},{"date":"2020-06-15","score":0},{"date":"2020-06-16","score":0},{"date":"2020-06-17","score":0},{"date":"2020-06-18","score":0},{"date":"2020-06-19","score":0},{"date":"2020-06-20","score":0},{"date":"2020-06-21","score":0},{"date":"2020-06-22","score":0},{"date":"2020-06-23","score":0},{"date":"2020-06-24","score":0},{"date":"2020-06-25","score":0},{"date":"2020-06-26","score":0},{"date":"2020-06-27","score":0},{"date":"2020-06-28","score":0},{"date":"2020-06-29","score":0},{"date":"2020-06-30","score":0},{"date":"2020-07-01","score":0},{"date":"2020-07-02","score":0},{"date":"2020-07-03","score":0},{"date":"2020-07-04","score":0},{"date":"2020-07-05","score":0},{"date":"2020-07-06","score":0}],"external_facing":true,"priority":10,"identifiers":["xyz.com"]},{"id":3965,"name":"xyz1.com/ecmlogin- 
DEV","repo_url":null,"host_name":null,"owner":null,"team_name":null,"business_units":null,"notes":null,"risk_meter_score":0,"vulnerability_count":0,"asset_count":0,"total_vulnerability_count":0,"open_vulnerability_count_by_risk_level":{"high":0,"medium":0,"low":0,"total":0},"historical_risk_meter_scores":[{"date":"2020-04-07","score":0},{"date":"2020-04-08","score":0},{"date":"2020-04-09","score":0},{"date":"2020-04-10","score":0},{..........  
I'm trying to create an empty panel for a custom title, but I would need to change the background so the color isn't the same as the default background color. Would this be possible? I don't see any tag for HTML reference in the documentation: https://docs.splunk.com/Documentation/Splunk/8.0.4/Viz/PanelreferenceforSimplifiedXML#Shared_attributes

I primarily want to change the background color of the row tag. Current code (the background color below only changes the heading line, not the entire row):

<row>
  <panel>
    <html>
      <H1 style="text-align:center;background-color:#485959;">Transfer Review</H1>
    </html>
  </panel>
</row>
My index time is 7/6/20 3:37:42.210 PM.
My event time is 07/06/20 10:37:42.210 CDT.
My TIME_FORMAT = %x %H:%M:%S.%3N%Z

But still, comparing the times above, we can see latency between index time and event time. Please suggest how to resolve this.
Hello! I’m trying to replace product codes with product names, like:

| replace "A1" with "Apple", "A2" with "Grape", "A3" with "Watermelon"

I’m getting what I want, except when there is more than one value in the Product code field:

Apple Grape
A1 | A2

How can I fix the rows with multiple values? Thank you.
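The replace command matches whole field values, so a compound value like "A1 | A2" never matches the bare "A1". The per-token mapping needed here, sketched in Python (in SPL this might be an eval/mvmap approach; the " | " separator is assumed from the sample):

```python
# Code-to-name mapping from the replace command above
names = {"A1": "Apple", "A2": "Grape", "A3": "Watermelon"}

def rename(field):
    # Map each token of a compound value separately, keeping unknown codes as-is
    return " | ".join(names.get(tok, tok) for tok in field.split(" | "))

print(rename("A1"))       # Apple
print(rename("A1 | A2"))  # Apple | Grape
```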
Good morning! I noticed today that a couple of my devices stopped sending logs to Splunk a couple of hours ago. I want to develop a dashboard to show timelines of stats count by host over the past 24 hours. So, something like this that shows each of my devices for the past 24 hours in one dashboard: I just want to be able to scroll through and make sure that all devices have logged something every hour for the past 24 hours. I am pretty new to Splunk, so I don't have a great knowledge base to pull from. Thank you very much for your help!

-Josh
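The hourly per-host bucketing such a dashboard needs (in SPL, something like `timechart span=1h count by host` is the usual approach) can be sketched in Python with hypothetical events:

```python
from collections import Counter
from datetime import datetime

# Hypothetical (host, timestamp) pairs standing in for indexed events
events = [
    ("fw1", datetime(2020, 7, 6, 9, 12)),
    ("fw1", datetime(2020, 7, 6, 9, 48)),
    ("fw2", datetime(2020, 7, 6, 10, 5)),
]

# Count events per host per hour bucket; an empty bucket flags a silent device
buckets = Counter((h, t.replace(minute=0, second=0, microsecond=0)) for h, t in events)
print(buckets[("fw1", datetime(2020, 7, 6, 9, 0))])  # 2
```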
Hi, Is there a way to update KVStore without utilizing external JS files, and instead have it embedded into the client side dashboard?  We want to avoid an extra step of waiting for an admin to deploy javascript into our prod environment.
Hello, I created a dashboard that works as a tool to filter a much larger report. For example, I'm using inputs/dropdown menus to filter by different columns. When I go to export the table that's been filtered with the dashboard, the icon to export as CSV is disabled. Is there a way to work around this? I know that this question has been asked before, but most of the threads I see on it are many years old, which is why I wanted to ask again in case anything has changed. Thank you for your help.
Hello,

Trying to add several maps to a dashboard, one map for each continent except North America. How do I lock a dashboard panel to only show that continent, and not zoom out to other continents? Here is my SPL. What would be the XML?

index IN (linuxevents) AND host IN (la1) AND source IN (/data/httpd_logs/ssl_access_log) AND method IN (GET,POST) AND cidrName!="waf*"
| dedup jsessionid
| iplocation client_IP
| search NOT (Country IN ("United States","Canada","Puerto Rico"))
| geostats latfield=lat longfield=lon count BY host
| geom geo_countries featureIdField="Country"

Thanks and God bless, Genesius
Hi, I see error messages from the exec processor on my Splunk DB Connect installs where dbxquery complains about input that suspiciously looks like stuff from Nessus. So it seems that dbxquery is listening on the Ethernet. How can I stop that? Me thinks it should listen only on localhost if it needs to listen at all.   thx afx