All Topics


Hi, currently my scheduled alert runs every five minutes, but I need it to trigger when the event count exceeds 2 in a minute. What is the best way to handle this?
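A minimal sketch of one way to do this, assuming a placeholder base search (index=your_index is hypothetical): bucket events into one-minute bins and alert when any bin exceeds 2, keeping the five-minute schedule.

index=your_index earliest=-5m@m latest=@m
| bucket _time span=1m
| stats count by _time
| where count > 2

With the alert set to trigger when the number of results is greater than 0, each five-minute run flags any minute in its window that crossed the threshold.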
Hi there, I am probably making this more confusing for myself than it needs to be, but it's a simple concept. Here is the scenario: if an invite is emailed and no confirmation is received within 1 day of the email being sent, then it is "In Progress"; otherwise it's a failure. Please help me formulate this; basically, if no confirmation is received within 1 day, it's in progress. I would like to keep all my times in epoch. Thank you in advance.

| makeresults
| eval email_sent=1637978619.056000
| eval time_passed_no_confirmation=86400
| eval confirmation_remains_null="null"
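A sketch of the status logic under those rules, assuming a hypothetical confirmation_time field that holds the confirmation epoch when one exists:

| makeresults
| eval email_sent=1637978619.056000
| eval deadline=email_sent+86400
| eval status=case(isnull(confirmation_time) AND now()<deadline, "In Progress", isnull(confirmation_time), "Failure", true(), "Confirmed")

Everything stays in epoch; only the case() comparisons decide the label.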
On various occasions I find myself writing formulas like (simplified version):

eval cat=case(like(CC, "TenantA%"), "ABC", like(CC, "TenantB%"), "BBC", true(), "Enterprise")

Or mapping the hosts to regions:

eval site=case(like(host, "%-au%"), "AWS US", like(host, "%-ac%"), "AWS CA", like(host, "%-ae%"), "AWS EU", true(), "UnKnown")

Sometimes I use the same mappings across many reports and dashboards. Copy/paste does not cut it, and the mappings occasionally need to be updated. Any suggestions?
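One common approach is to move the mapping into a wildcard lookup so every report and dashboard shares a single definition. A sketch, assuming a CSV named cc_category.csv uploaded as a lookup (file name and field names are illustrative):

CC,cat
TenantA*,ABC
TenantB*,BBC
*,Enterprise

transforms.conf:

[cc_category]
filename = cc_category.csv
match_type = WILDCARD(CC)
max_matches = 1

Then any search can apply it with:

... | lookup cc_category CC OUTPUT cat

Updating the CSV updates every consumer at once.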
Hi there! We have a daily push from Google over to our Splunk instance that provides directory information such as the total number of users. I have a very simple query today that can parse the information I need into two values covering the last directory push into Splunk:

index="google" sourcetype="*directory*" "emails{}.address"="*@mydomain.com"
| chart count by archived

This returns two values from the latest directory push. My question is this: I would like to embed this in a dashboard so that we can show historical values of this data in a bar chart over time, i.e. how the directory is growing or shrinking week-to-week or month-to-month. I am not sure whether I should head down the timechart path or use a different method, given that this is a periodic (every 24 hours) single entry pushed into the Splunk server. Thoughts on which path to start down would be super helpful.
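Since the data arrives once per day, timechart with a daily span is a reasonable starting point; a sketch using the same base search (the span is illustrative):

index="google" sourcetype="*directory*" "emails{}.address"="*@mydomain.com"
| timechart span=1d count by archived

Run over the last 30 or 90 days, this yields one bar per daily push, split by the archived flag; span=1w would roll it up week-to-week.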
I am attempting to use a search from IT Essentials Learn named "Alert when host stops reporting data - Linux - IT Essentials Work". Is it possible to filter this alert by host type? I've performed a number of tests now and it seems my only option is to search against all hosts. Here is the search from IT Essentials Learn:

| tstats dc(host) as val max(_time) as _time where index="<INDEXES-TO-CHECK>" host="<HOSTS-TO-CHECK>" by host
| append [| metadata type=hosts index="<INDEXES-TO-CHECK>" | table host lastTime | rename lastTime as _time | where _time>now()-(60*60*12) | eval val=0]
| stats max(val) as val max(_time) as _time by host
| where val=0
| rename val as "Has Data"
| eval "Missing Duration"=tostring(now()-_time, "duration")
| table host "Has Data" "Missing Duration"

I modified the two index references and the host reference. If I use * for all three it kind of works, but it checks against every host. If I use host=*dev*, all hosts without *dev* in the name evaluate to 0, whereas all the *dev* hosts evaluate to 1. To counteract this I tried adding where host=*dev* elsewhere (in the metadata portion, as a where clause at the end of all the metadata piping, as a where clause next to where val=0, etc.), but this just completely removes a host that isn't sending data from the list (or removes all hosts), so that also does not work. Is it possible to split this up based on hosts, or am I stuck with all or nothing?

Edit: I tried adding a where all the way at the end. It does not work with host="*dev*"; however, I can use host!="some host name" to filter those out. I'm not sure why I can use negation but not wildcards?

Edit2: I am searching over the prior 5 minutes, if that matters at all.
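A possible explanation for the negation-versus-wildcard behavior: the where command evaluates an expression, so host="*dev*" compares against the literal string *dev* (and host!="some host name" happens to work because it too is a literal comparison); wildcard matching needs the search command or match(). A sketch of the filter applied in both branches, assuming the *dev* naming convention:

| tstats dc(host) as val max(_time) as _time where index="<INDEXES-TO-CHECK>" host="*dev*" by host
| append [| metadata type=hosts index="<INDEXES-TO-CHECK>" | search host="*dev*" | table host lastTime | rename lastTime as _time | where _time>now()-(60*60*12) | eval val=0]
| stats max(val) as val max(_time) as _time by host
| where val=0

Alternatively, | where match(host, "dev") at the very end performs a regex match rather than a literal comparison.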
Hello, I am running a * search in an app and it returns several columns in the CSV extract, one of which is named 'source'. I want to return the distinct values of 'source', but neither of the below works:

| values(source)

or

| distinct source

Any ideas? Thanks!
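values() is an aggregation function rather than a command, so it has to appear inside stats; distinct is not an SPL command at all. Two equivalent sketches:

... | stats values(source) as source

or

... | dedup source | table source

The first returns one multivalue cell of distinct sources; the second returns one row per distinct source.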
I have 2 independent queries run on 2 different indexes that give me a list of requestIds. I want to filter out (not include) the requestIds of the second query in my search. I am trying to use the following query to do so, but it is not filtering the results from the second query. What am I doing wrong here?

index="index1" <query1>
| rename requestId AS Result
| table Result
| search NOT [search index="index2" <query2> | rename RequestId AS Result | table Result]
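One thing to check is that the field name returned by the subsearch matches the field being filtered at the point where the NOT runs. A sketch of a slightly more conventional arrangement, filtering on the original field and using fields instead of table in the subsearch (assuming the field in index1 is requestId, as in the example):

index="index1" <query1>
| search NOT [search index="index2" <query2> | rename RequestId AS requestId | fields requestId]
| rename requestId AS Result
| table Result

Also worth noting: subsearch output is capped (commonly around 10,000 results plus a runtime limit), so if index2 returns more requestIds than the cap, the exclusion list is silently truncated.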
We have an application that sends all its log messages to Splunk (so far so good), and an alert configured to fire whenever a message with severity above INFO level is logged. This works OK most of the time, except that when the application restarts, multiple such warnings and errors are logged by some of its threads. We don't care about these, because the main thread has already announced that it is shutting down. How can I phrase the search underlying our alert to exclude any log entries made after the "I am shutting down" message and before the "I started up" one? To clarify: we want Splunk to receive all the log entries; we just don't want the alert to be triggered by those emitted during the program restart.
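A sketch of one approach using marker events and filldown, assuming the literal marker strings and a severity field (all names illustrative): tag each event with the most recent lifecycle marker seen before it, then drop non-INFO events that fall inside a shutdown window.

index=app_logs ("I am shutting down" OR "I started up" OR severity!=INFO)
| sort 0 + _time
| eval marker=case(searchmatch("I am shutting down"), "down", searchmatch("I started up"), "up", true(), null())
| filldown marker
| where severity!="INFO" AND coalesce(marker, "up")="up"

Events between a "down" marker and the next "up" marker inherit marker="down" and are excluded from the alert, while still being indexed normally.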
I need help regarding a join of events from two different indexes that are related by the same value in one specific field. Below is a simple example:

index=source1 | table device.hostname, device.serialnumber

Results:

device.hostname   device.serialnumber
host1             ABC
host2             DEF

index=source2 | table hostname, user

Results:

hostname   user
host1      john
host2      mary

I would like to join these two searches in order to get the following results:

device.hostname   device.serialnumber   user
host1             ABC                   john
host2             DEF                   mary

Thanks in advance for your help.
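A sketch that avoids the join command by searching both indexes at once and aggregating on a common key (field names taken from the example; coalesce handles the differing hostname field names, and the single quotes are needed in eval because of the dot in device.hostname):

index=source1 OR index=source2
| eval key=coalesce('device.hostname', hostname)
| stats values(device.serialnumber) as "device.serialnumber" values(user) as user by key
| rename key as "device.hostname"

stats-based joins like this sidestep the subsearch and row limits of | join.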
I believe there is a latent bug in the aws_config_cli.py script for the AWS Add-on. The list function below is from the latest version (5.2.0):

def list(self):
    names = None
    if self.params.names:
        names = self.params.names.split(',')
    results = self.config_mgr.list(
        self.endpoint, self.params.hostname, names)
    items = []
    for result in results:
        item = copy.deepcopy(result['content'])
        item['name'] = result['name']
        items.append(item)
    print json.dumps(items, indent=2)

Notice the print statement. In Python 3, print is a function and thus requires parentheses, so the code should actually be:

print(json.dumps(items, indent=2))

Additionally, in compose_cli_args,

for resource, desc in resources.iteritems():

should be

for resource, desc in resources.items():

since dict.iteritems() was removed in Python 3.
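If the script still needs to run on both interpreters during a transition, a minimal standalone sketch of the version-agnostic idioms (this is illustrative code, not from the add-on):

from __future__ import print_function  # makes print a function on Python 2
import json

items = [{"name": "example", "content": {}}]  # stand-in for the real results
print(json.dumps(items, indent=2))            # valid on Python 2 and 3

for resource, desc in {"a": 1}.items():       # .items() exists on both versions
    print(resource, desc)

On Python 2, .items() returns a list instead of a view, which is fine for a CLI of this size.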
Hi, I am trying to filter the events using the LOGIN keyword and drop the remaining events. I am trying the configuration below and it is not working. Any suggestions, please?

props.conf

[test_sourcetype]
TRANSFORMS-sample = test_authlog,setnull_test

transforms.conf

[test_authlog]
REGEX = (LOGIN)
DEST_KEY = queue
FORMAT = indexQueue

[setnull_test]
REGEX = (?!LOGIN)
DEST_KEY = queue
FORMAT = nullQueue
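For comparison, the documented discard-then-keep pattern looks like the sketch below: the nullQueue transform runs first and matches everything, and the indexQueue transform then overrides the queue for LOGIN events. Order in the TRANSFORMS list matters, and (?!LOGIN) as written is a lookahead that matches at almost every position, which is likely why the original does not behave as intended.

props.conf:

[test_sourcetype]
TRANSFORMS-sample = setnull_test, test_authlog

transforms.conf:

[setnull_test]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[test_authlog]
REGEX = LOGIN
DEST_KEY = queue
FORMAT = indexQueue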
Hi SMEs, I am trying to write a regex to parse/map CEF-format fields as below, so that each corresponding field name captures its value. I am not able to capture values that have spaces in them. Seeking suggestions; snapshot attached for reference.

regex101:

c[n|s]\dlabel\=(\w+).*?c[n|s]\d\=([\.a-zA-Z0-9_-]+)

CEF:0|vendor|product|1.1|1234|PolicyAssetUpdated|1|cn1label=EventUserId cn1=-3 cs1label=EventUserDisplayName cs1=Automated System cs2label=EventUserDomainName cs2= cn2label=AssetId cn2=20888 cs3label=AssetName cs3=ABCDPQRS.domain.com cn3label=DirectoryId cn3=856 cs4label=DirectoryName cs4=Active Directory cs5label=DomainName cs5=domain.com
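A sketch of a regex that tolerates spaces in values by matching lazily up to the next cn/cs key (or end of event); note also that [n|s] includes a literal | in the character class, so [ns] is probably what was intended:

| rex max_match=0 "c[ns]\d+label=(?<label>\w+)\s+c[ns]\d+=(?<value>.*?)(?=\s+c[ns]\d+(?:label)?=|$)"

Against the sample event this captures cs4 as "Active Directory" and leaves cs2 empty. The two multivalue fields can then be paired up, for example with | eval pairs=mvzip(label, value, "=").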
Hello, I am creating a query for my proxy data. The idea is to show all the categories I care about in multiple single-value charts, and any category that returns 0 should still be represented by a 0. My current query is:

index="siem-cyber-proxy" action=blocked category=gambling OR category=malware
| eval isEvent=if(searchmatch("category"),1,0)
| stats count as myCount sum(isEvent) AS isEvent
| eval result=if(isEvent>0, isEvent, myCount)
| table result

This query adds results from both categories together rather than splitting them into individual charts. I need to find out how to split the results so it creates multiple charts, or do I need to run the query for each individual category? Hopefully this makes sense. Thank you.
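A sketch of one way to get a per-category count with explicit zeros, assuming the category values are exactly gambling and malware: append a zero row for each category, then take the maximum per category.

index="siem-cyber-proxy" action=blocked category IN (gambling, malware)
| stats count by category
| append [| makeresults | eval category=split("gambling,malware", ",") | mvexpand category | eval count=0 | fields category count]
| stats max(count) as count by category

Each single-value panel can then post-process this base search with, e.g., | where category="gambling" | table count; alternatively, a trellis layout on the by-category result renders one single-value tile per category automatically.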
We have upgraded a few hundred forwarders (UF & HF) to Splunk 8.2.3 and are looking for benchmark checks to make sure they are fully functional. We have a large clustered environment (indexers / search heads) with ES. Any SPL for benchmarking the upgraded instances, on ES and the forwarders, is appreciated. Thank you & stay safe.
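A sketch of a quick post-upgrade check using the forwarder connection metrics in _internal (field names as they appear in metrics.log tcpin_connections events): confirm every forwarder is phoning home and reporting the expected version.

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) as splunk_version max(_time) as last_seen by hostname
| eval last_seen=strftime(last_seen, "%F %T")
| sort splunk_version

Forwarders missing from the output, or still showing a pre-8.2.3 version, are the ones to chase.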
Hi, I am getting:

KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.

I stopped Splunk, moved the mongod folder aside, and started Splunk again. Now I am getting:

2021-12-01T13:55:55.528Z W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
2021-12-01T13:55:55.545Z F NETWORK [main] The provided SSL certificate is expired or not yet valid.
2021-12-01T13:55:55.545Z F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager.cpp 1120
2021-12-01T13:55:55.545Z F - [main] ***aborting after fassert() failure

I want to regenerate server.pem. Just to confirm, is this the right command?

$SPLUNK_HOME/bin/splunk createssl

What are the risks?
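On the assumption that the expired certificate is the default self-signed $SPLUNK_HOME/etc/auth/server.pem (rather than a custom one configured in server.conf), a commonly used sketch is to move it aside and let splunkd regenerate it on startup:

$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.bak
$SPLUNK_HOME/bin/splunk start

The main risk is that anything validating the old certificate (for example forwarders with sslVerifyServerCert pinned to it, or distributed-search peers) would need the new certificate distributed to them.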
Dear Splunk Community, I have the following code:

<dashboard>
  <label></label>
  <row>
    <panel>
      <single>
        <search>
          <query>host="DESKTOP-L4ID3T2" source="BatchProcessor*" inventoryimport* "ExitCode: 0"
| stats count
| eval msg=case(count == 0, "Scan niet succesvol!", count &gt; 0, "Scan succesvol!")
| eval range=case(count == 0, "severe", count &gt; 0, "low")
| table msg</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="field">range</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</dashboard>

If count returns 0 events, I expect the color of the single-value field to be red (severe); otherwise it should be green (low). But with the above code there is no color at all (just the default black and white). Why is the above not working? Thanks in advance.
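Two things stand out: | table msg drops the range field that the field option points at, and the single-value visualization also needs its color options enabled. A sketch driving the color from the count itself (hex values illustrative; one boundary at 0 means 0 renders in the first color and anything above in the second):

<query>host="DESKTOP-L4ID3T2" source="BatchProcessor*" inventoryimport* "ExitCode: 0"
| stats count</query>
...
<option name="useColors">1</option>
<option name="colorBy">value</option>
<option name="rangeValues">[0]</option>
<option name="rangeColors">["0xdc4e41","0x53a051"]</option>

If the textual message is still wanted, it can go in the panel title or an underLabel option rather than replacing the numeric field being colored.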
I'm new to this! Our custom app stopped working on Splunk Cloud, and I was told by support to change the XML file below from app.html to search.html because of a Cloud change. However, when I go to re-upload the same app, I get the error that app validation failed: "App does not support search head cluster environments." It passes all the vetting and just gives that error at the end. What could it possibly be looking for? Again, I'm new to this, so I can't find the answer on Google.

<?xml version="1.0"?>
<view template="pages/search.html" type="html" isDashboard="False">
  <label>Search</label>
</view>
The certificate configuration tutorials have unfortunately left me with some lingering questions. Premise: they have taught me that in order to set up a 3rd-party-signed certificate for a Splunk Enterprise server, I must:

1. Create a private key.
2. Create a CSR, using the aforementioned private key.
3. Send the CSR to the CA of the current company.
4. Receive a multitude of certificates: a server cert, a CA root cert, and perhaps CA intermediate certs.
5. Optionally combine the CA root and CA intermediate certs to create a CAbundle.pem, which I can reference in any CA cert fields (example: the sslRootCaPath field in server.conf).
6. Combine the server cert, private key, and CA bundle to create a complete Splunk Enterprise signed certificate (to be used by fields such as serverCert in inputs.conf or sslCertPath in outputs.conf).

So far so good. This procedure lets me set up SSL connections between Splunk Enterprise instances. I have two scenarios where this setup probably does not work, and I would like to know how I can make them work:

1) I want to deploy 100 forwarders remotely and set them up so that they send their data to an indexer or heavy forwarder over SSL. Problem: the process of getting a 3rd-party-signed certificate for each and every forwarder is arduous, and I don't believe it can be done remotely in any effective way. My thoughts: can I use (part of) the certification of the data receiver (IDX/HF) as a public key which I can then send to all forwarders? Clearly I cannot use the concatenated certificate described in premise step 6, because it contains a private key. Could I maybe use the signed server cert that I received from the 3rd party, pre-concatenation? A Splunk data receiver does not necessarily have to validate the certificate of a data sender, so I don't see why each universal forwarder should be equipped with its own certificate. There has to be a way to have them only check whether the indexer has a valid certificate.

2) Say I want to connect another application (like the Infoblox Splunk Connector) to a Splunk data receiver while using SSL. My thoughts: I expect that sending the CA bundle (premise step 5) should be enough, so that the application side can create its own certificate and perhaps combine it with the CA root somehow... but I guess my question is the same as before; I cannot send the concatenated .pem from premise step 6. What is the best way to set up an SSL connection to another application? Thanks in advance.
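For scenario 1, the intuition looks right: when the receiver does not require client certificates, the forwarders only need the CA chain (the bundle from step 5, which contains no private key) to verify the server. A sketch of the two sides, with hostnames and paths illustrative:

On each universal forwarder, outputs.conf:

[tcpout:primary_indexers]
server = idx1.example.com:9997
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/cacert.pem
sslVerifyServerCert = true
sslCommonNameToCheck = idx1.example.com

On the receiving indexer/HF, inputs.conf:

[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer-full.pem
sslPassword = <key password>
requireClientCert = false

With requireClientCert = false, only the receiver presents a certificate, and the same CA bundle can be pushed to all 100 forwarders (for example via the deployment server). The same idea carries over to scenario 2: the third-party application needs the CA bundle to verify the receiver, not the concatenated server .pem.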
I have 3 servers (two of them have 4x600GB HDD and one has 6x600GB HDD and 2x800GB SSD). I want to build a small Splunk architecture for 10GB/day with:

- 3 indexers in a cluster
- 1 combined deployment server, license manager, and cluster manager
- 1 search head

What is my best option for building this architecture? I was thinking of making a server cluster using Proxmox and then deploying each of those machines in a virtual environment, but I do not have a NAS as a separate device, and to get the best availability from server clustering I need to make those VMs as light as possible.
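For reference, once the VMs exist, wiring up a small indexer cluster is mostly two CLI calls per role; a sketch for 8.2+ (values illustrative; with only 3 peers, replication factor 3 / search factor 2 is a common choice, and older releases use -mode master / -master_uri instead):

On the cluster manager:

$SPLUNK_HOME/bin/splunk edit cluster-config -mode manager -replication_factor 3 -search_factor 2 -secret <cluster-secret>
$SPLUNK_HOME/bin/splunk restart

On each indexer:

$SPLUNK_HOME/bin/splunk edit cluster-config -mode peer -manager_uri https://<manager-host>:8089 -replication_port 9887 -secret <cluster-secret>
$SPLUNK_HOME/bin/splunk restart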
Hello, I recently messed up the permissions for the only account in my testing environment instance. I no longer have access to search my existing indexes, and I cannot seem to re-grant admin-level privileges to my account, as I do not have the privileges to do so. I have tried to make another account, but of course I am unable to give that account the permissions I need. If there is any way I can restore my access, please let me know.
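Assuming you have filesystem access to the instance, a commonly cited recovery sketch is to move the local passwd file aside and seed a fresh admin account (paths assume a default install):

$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak

# create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
[user_info]
USERNAME = admin
PASSWORD = <new password>

$SPLUNK_HOME/bin/splunk start

On startup, Splunk creates the seeded account with the admin role; the old accounts can then be repaired or recreated from the new admin session. Keep the passwd backup rather than deleting it, in case anything needs to be recovered.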