All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi all, the "Threat - Source And Destination Matches - Threat Gen" saved search in Enterprise Security ran with status=success but returned 0 results. But if I run the search separately with the saved search's time range, I get results. I have also checked the permissions of all the macros and data models used in the search; they are all global. So I'm not sure what the issue is here, please help me.   @kamlesh_vaghela @melissap @vikramyadav @HoardingIO
Hello. This is not a question but a use case that I can't manage to resolve. The situation: a log file on a remote server, with a Splunk Universal Forwarder and only an inputs.conf (no other conf), and a props.conf on a Heavy Forwarder with:

LINE_BREAKER = \d{1,4}-\d{1,2}-\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}(,|.)\d{1,3}\s\[

The default format:

2020-12-07 08:02:24.350 [<thread>] <type> <bla bla bla ...>

When the record is a Java exception log, there is no problem: the record is complete and contains the whole stack trace (SHOULD_LINEMERGE = TRUE). But I have other cases where Splunk groups a whole set of lines together. Each time, the first line contains the word "started" and the last the word "ended". For example, to Splunk this is only one log, but I would like 4!

2020-12-07 08:02:30,567 [http-nio-10.108.181.36-30000-exec-6] INFO p.a.f.s.w.FrontalAuthentRestController 869 - - b- health-check started. (FrontalAuthentRestController.java:37)
2020-12-07 08:02:30,583 [http-nio-10.108.181.36-30000-exec-6] INFO p.a.f.s.w.FrontalAuthentRestController 869 - - b- health-check ended. (FrontalAuthentRestController.java:44)
2020-12-07 08:02:34,670 [http-nio-10.108.181.36-30000-exec-9] INFO p.a.f.s.w.FrontalAuthentRestController 845 - - b- health-check started. (FrontalAuthentRestController.java:37)
2020-12-07 08:02:34,684 [http-nio-10.108.181.36-30000-exec-9] INFO p.a.f.s.w.FrontalAuthentRestController 845 - - b- health-check ended. (FrontalAuthentRestController.java:44)

I find the same problem with "proxy started" / "proxy ended", or "doFilter started" / "doFilter ended". Each time, Splunk gathers the records into one:

2020-12-07 08:01:43,430 [http-nio-10.108.181.35-30000-exec-3] INFO p.a.f - proxy started. (Proxy.java:106)
2020-12-07 08:01:43,433 [http-nio-10.108.181.35-30000-exec-3] INFO p.a.f. - IDPART = // NUINPE = (Proxy.java:108)
2020-12-07 08:01:43,443 [http-nio-10.108.181.35-30000-exec-3] INFO p.a.f - /3 (ProxyHelper.java:49)
2020-12-07 08:01:43,444 [http-nio-10.108.181.35-30000-exec-3] INFO p.a.f. - /3 (Proxy.java:114)
2020-12-07 08:01:43,907 [http-nio-10.108.181.35-30000-exec-3] INFO p.a.f - proxy ended. (Proxy.java:124)

Do you have an idea?
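One approach worth trying (a sketch; the sourcetype name is a placeholder): Splunk uses the first capture group of LINE_BREAKER as the break point, and in the regex above that first group is the `(,|.)` inside the timestamp. Disabling line merging and breaking on the newline run before each timestamped line gives one event per log record while still keeping stack-trace continuation lines (which do not start with a timestamp) attached to the event before them:

```ini
# props.conf on the Heavy Forwarder ("my_app_logs" is a placeholder sourcetype)
[my_app_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}[,.]\d{3}\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

With SHOULD_LINEMERGE = false the capture group must be the newline run itself, not a piece of the timestamp.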
Hello, I was wondering if Splunk has any 7.x Universal Forwarders with known problems or bugs, such as not sending logs even though the service is running, or other similar issues that could cause the process and/or service to crash? Regards,
Hi, I have field values A, B, C, D, E, F, G, H, I, J for one of my applications. I need output as below:

Product  Alert by Team1              Alert by Team2
AA       Count(A,B,C,E,F,G,H,I,J)    Count(D)

The same applies to multiple products with different field values, e.g.:

Product  Alert by Team1              Alert by Team2
AA       Count(A,B,C,E,F,G,H,I,J)    Count(D)
BB       Count(MM,AD,FG,TH,KL)       Count(BB)

Please help me with this query. Note: all of these values appear in the same field.
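One way to sketch this (the field names alert_code and product are assumptions, as is the mapping of values to teams; adjust the case() conditions to match each product's split): classify each value into a team with eval, then pivot the counts per product.

```
index=myapp_index
| eval team=case(alert_code=="D" OR alert_code=="BB", "Alert by Team2",
                 true(), "Alert by Team1")
| chart count over product by team
```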
Hi, can a POST workflow action send mail to a URI location?
The Elasticsearch Data Integrator appears to be unresponsive when a non-admin user tries to access the inputs and configuration menus. Any idea why this is happening?
I need the download for the 32-bit installer. When I launch the command, I only get the 64-bit one.
Hi all, I have been stuck on this problem for a long time. I do not know how to fix it, since I do not have the privileges to check limits.conf. The problem is: when someone who can view and create reports/dashboards runs the dashboard, they get an accurate result (900K rows). When someone who can only view dashboards/reports runs the same dashboard, the result is not the same (67K rows). I'm not sure whether this is related to limits.conf, but do you think that makes sense? I have changed a lot of the XML to fix this, but it's always the same. I have no ideas right now. FYI, the loaded job/saved search returns around 2M rows from 48M events.
We have Splunk Enterprise, connected via a Kinesis Stream. We made an online change to increase the number of shards from 1 to 3; however, the Splunk connector has since failed to pick up data. Why? Do I need to restart the connector?
I need to be able to list all applications installed on a host, with version number and whether the application is 32- or 64-bit. This is my search:

index=perfmon sourcetype="WMI:SoftwareVersions" Name=Teams PackageName!=Teams
| dedup host
| table _time host Name Version

This works, but what I am missing is displaying whether the version is 32- or 64-bit. Regards
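If the events carry an install-path field, a common Windows heuristic is that 32-bit software installs under "Program Files (x86)". The field name InstallLocation below is an assumption; check the actual field names in your WMI:SoftwareVersions events.

```
index=perfmon sourcetype="WMI:SoftwareVersions" Name=Teams PackageName!=Teams
| eval Arch=if(like(InstallLocation, "%Program Files (x86)%"), "32-bit", "64-bit")
| dedup host
| table _time host Name Version Arch
```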
I have a use case to set the default dashboard time range from calendar day 26 to business day 4 (e.g. Nov 26th to Dec 4th, the 4th business day in December). This business day changes from month to month; for example, in January the 7th will be the 4th business day of the month. I have a lookup which lists the business days as "BD" and calendar days as "CD". I want to use those values to restrict the default dashboard view to each cycle. The dashboard should stick to the cycle (CD26 to BD4) and reset on every monthly cycle. If the dashboard is opened outside the cycle, e.g. on the 10th calendar day, it should show results up to BD4 only. I am trying to use the query below, but I am unable to get correct results. I appreciate any help provided here.

| makeresults
| timechart span=1d count
| eval day=strftime(_time,"%d"), earliest=if(day>25,"@mon+25d","-1mon@mon+25d"), today=strftime(_time,"%A")
| eval time=strftime(_time,"%Y-%m-%d"), date=strftime(_time,"%Y%m%d")
| join type=left date
    [| inputlookup abc.csv
     | eval time=strptime(Date,"%m/%d/%Y"), date=strftime(time,"%Y%m%d")]
| eval flag=case(BD=4 AND CD<26, C)
| eval latest=case(BD=4 AND CD<26, "@mon+".C."d")
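One way to sketch the cycle-end boundary without joining against a timechart (assumptions: abc.csv has Date in %m/%d/%Y format plus BD and CD columns, as described above): look up the BD=4 row for the current month directly and turn its date into a latest value.

```
| inputlookup abc.csv
| eval t=strptime(Date,"%m/%d/%Y")
| where BD==4 AND strftime(t,"%Y%m")==strftime(now(),"%Y%m")
| eval latest=relative_time(t,"+1d@d")
| table Date BD latest
```

The resulting latest value could then be set as a dashboard token from a search's done handler.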
Hi all! I have been trying to automate a task lately. I'm able to edit one notable event using the API just fine, but I want to edit multiple notables at the same time; it would be a tedious job to manually go through each notable event and take the "event_id" one by one! Is there a way to make this happen? Something like selecting the notable events I want to edit from the Enterprise Security Incident Review page and copying their "event_id" values to the clipboard? Thanks in advance.
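For batch edits, the Enterprise Security REST endpoint /services/notable_update accepts the ruleUIDs parameter repeated once per event, so a script can update many notables in a single POST. A minimal sketch, assuming the host, credentials, status code, and event IDs shown (ruleUIDs values are the event_id values of the notables):

```python
# Sketch: update several ES notable events in one REST call.
# Host, credentials, and example IDs below are assumptions.
import base64
import ssl
import urllib.parse
import urllib.request

def build_notable_update(event_ids, status=None, comment=None, urgency=None):
    """Build the url-encoded form body; ruleUIDs is repeated per event."""
    params = [("ruleUIDs", eid) for eid in event_ids]
    if status is not None:
        params.append(("status", str(status)))
    if comment is not None:
        params.append(("comment", comment))
    if urgency is not None:
        params.append(("urgency", urgency))
    return urllib.parse.urlencode(params)

def update_notables(host, username, password, event_ids, **kwargs):
    """POST the batch update to the ES search head (network call, untested sketch)."""
    url = "https://%s:8089/services/notable_update" % host
    body = build_notable_update(event_ids, **kwargs).encode()
    req = urllib.request.Request(url, data=body)
    token = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.read()
```

The event_id values themselves can be collected with a search over the notable index (e.g. `search \`notable\` | table event_id`) rather than copied one by one from Incident Review.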
Hello fellow Splunkers, I would like to know if someone has come across a way to detect timing attacks via a Splunk query. I have read some posts on GitHub pointing to useful information, but still nothing concrete. I know we could do something with machine learning, but I'm not sure how to approach it in enough depth to check for this. Thanks so much,
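As a starting point short of machine learning: timing attacks tend to show up as one source hammering the same endpoint with many near-identical requests while measuring small latency differences, so a high request count combined with low response-time variance is a reasonable first signal. A sketch (the index, field names, and thresholds are all assumptions to adapt to your data):

```
index=web sourcetype=access_combined
| stats count avg(response_time) AS avg_rt stdev(response_time) AS sd_rt BY src uri
| where count > 100 AND sd_rt < 5
| sort - count
```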
Hi, we have a search head cluster with three members; as you know, all members have the same "default host name". When I try to enable KV store monitoring in the Monitoring Console, it says "Duplicate instance name. Ensure each instance has a unique instance (host) name." But because they are members of a search head cluster, I can't assign a unique name to each one. How can I enable KV store monitoring for search head cluster members?
I'm utilizing Principal Component Analysis (PCA) with a RandomForestRegressor model to process some of the text fields in my data, which results in a certain number of PCA fields (around 30, I would say). The model looks good upon the initial `fit` from within the experiment window, so I saved the model and scheduled a training run to occur every morning. However, the scheduled training fails with a "Usecols do not match columns, columns expected but not found" error. It normally reports a handful of PC_* fields on the higher end of the range (like PC_27 - PC_31) as not being found. The error appears to come directly from the pandas Python library, but I don't have the capability to troubleshoot the code itself and am hoping to resolve the issue via MLTK. Can anyone assist?
Hi, I am trying to port an app that needs access to x509 details to Python 3. Splunk does not ship the OpenSSL (pyOpenSSL) module for Python 3, only for Python 2, and the new way seems to be using the cryptography package. But that is also not shipped with Splunk 8. On the other hand, looking at the modules shipped with Python 3 in Splunk 8, I see that they do reference cryptography as a dependency (pyopenssl). That looks a bit weird to me. Theoretically I could just dump cryptography into my app directory, but that would also include a shared object file, which seems counterproductive when wanting to publish the app outside my organization. Any ideas on how to resolve this? thx afx
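Depending on which x509 details the app actually needs, one way to avoid bundling a compiled wheel is to stay inside the Python 3 standard library: ssl's getpeercert() returns the subject, issuer, and validity of a live peer certificate as plain dicts and tuples. A sketch under that assumption (it only covers certificates fetched over a connection, not parsing arbitrary PEM files; the host name is a placeholder):

```python
# Sketch: x509 details using only the Python 3 standard library.
import socket
import ssl

def peer_cert_details(host, port=443):
    """Fetch and decode the peer certificate of host:port (network call)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # dict with 'subject', 'issuer', 'notBefore', 'notAfter', ...
            return tls.getpeercert()

def subject_cn(cert):
    """Pull the commonName out of a getpeercert()-style dict."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return None
```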
I just installed Docker and the Splunk Connect for Syslog app. I configured the env_file to point to my HTTP Event Collector, have configured the indices, and have received the test events. How do I actually configure listening on a port? The documentation here: https://splunk-connect-for-syslog.readthedocs.io/en/master/configuration/ says:

"Other than device filter creation, SC4S is almost entirely controlled by environment variables. Here are the categories and variables needed to properly configure SC4S for your environment."

Where do I configure these environment variables? Perhaps /opt/sc4s/local/config, but in what file type, with what schema? I mean, I can read, the key/value pair is SC4S_LISTEN_DEFAULT_TLS_PORT=whatever, but where do I put that? I was trying to set up receiving of firewall logs from pfSense; the documentation for it says:

"Review and update the splunk_metadata.csv file and set the index and sourcetype as required for the data source."

So maybe this is the answer, I should create a CSV? That doesn't sound right. Probably if I knew Docker I would know the answer to all these questions, but if anyone could educate me on how to use this, show me some example configurations, and show me the file paths they are located in, I would be deeply appreciative.

<edit> Never mind, I found it. The answer is: most things are configured in /opt/sc4s/env_file; indexes and sourcetypes are configured in /opt/sc4s/local/context/splunk_metadata.csv. In the spirit of intellectual honesty, it was in the docs in a couple of places, namely the Getting Started section in the OS- and container-specific sections, although not in ALL of them. If I may make a request to the app developers: I think adding the two paragraphs below to the Quickstart Guide would have helped; it is an intuitive place to look for people who missed it the first time.
Dedicated (Unique) Listening Ports

For certain source technologies, categorization by message content is impossible due to the lack of a unique "fingerprint" in the data. In other cases, a unique listening port is required for certain devices due to network requirements in the enterprise. For collection of such sources, we provide a means of dedicating a unique listening port to a specific source. Follow this step to configure unique ports for one or more sources: modify the /opt/sc4s/env_file file to include the port-specific environment variable(s). Refer to the "Sources" documentation to identify the specific environment variables that are mapped to each data source vendor/technology.

Modify index destinations for Splunk

Log paths are preconfigured to utilize a convention of index destinations that are suitable for most customers. If changes need to be made to index destinations, navigate to the /opt/sc4s/local/context directory to start. Edit splunk_metadata.csv to review or change the index configuration as required for the data sources utilized in your environment. The key (1st column) in this file uses the syntax vendor_product. Simply replace the index value (the 3rd column) in the desired row with the index appropriate for your Splunk installation. The "Sources" document details the specific vendor_product keys (rows) in this table that pertain to the individual data source filters that are included with SC4S. Other Splunk metadata (e.g. source and sourcetype) can be overridden via this file as well. This is an advanced topic, and further information is covered in the "Log Path overrides" section of the Configuration document.
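For reference, the env_file mentioned above is a plain KEY=value file, one variable per line. A sketch of what it might contain (the URL, the token, and the exact pfSense variable name are assumptions; dedicated-port variables follow the per-source pattern documented in the "Sources" pages):

```ini
# /opt/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk.example.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=00000000-0000-0000-0000-000000000000
# dedicate a listening port to one source (check the Sources docs for the exact name)
SC4S_LISTEN_PFSENSE_FIREWALL_UDP_PORT=5141
```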
Hi, our Splunk Enterprise resides on-premises. What is the capacity of a HEC token? How much log data can be ingested into Splunk using one HEC token on a daily basis?
Hi all, is there a way to ingest logs from Fluentd into Splunk apart from the HEC method?
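Two common alternatives to HEC (a sketch; the port, index, and sourcetype below are placeholders): have Fluentd write to files that a Universal Forwarder monitors, or point a Fluentd TCP/syslog output at a raw TCP input on a Splunk instance, defined in inputs.conf:

```ini
# inputs.conf on an indexer or heavy forwarder
[tcp://5140]
sourcetype = fluentd
index = main
```

The file-monitor route preserves Splunk's usual line-breaking and timestamping behavior, while the raw TCP route avoids the intermediate files.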
I have the below JSON event with a nested array in Splunk:

{
  "index": 2,
  "rows": [
    { "apple": 29 },
    { "carrot": 12 },
    { "carrot": 54, "apple": 23 },
    { "carrot": 67, "apple": 9 }
  ]
}

An important thing to consider is that some entries in the JSON array can have one or more missing fields. I want to write a Splunk query which would create a table like the following:

index  apple  carrot
2      29
2             12
2      54     23
2      67     9

I could write a Splunk query like the following:

| makeresults
| eval _raw="{ \"index\":2, \"rows\": [ {\"apple\": 29}, {\"carrot\": 12}, {\"carrot\": 54, \"apple\": 23}, {\"carrot\": 67, \"apple\":9} ] }"
| spath
| spath input=rows
| table index,rows{}.apple,rows{}.carrot

But it has two problems: 1) I need separate rows, and 2) I need to maintain a one-to-one mapping of the individual columns.
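A sketch that addresses both problems: expand the rows array into one event per element before extracting the fields, so the apple/carrot pairing within each element is preserved.

```
| makeresults
| eval _raw="{\"index\":2,\"rows\":[{\"apple\":29},{\"carrot\":12},{\"carrot\":54,\"apple\":23},{\"carrot\":67,\"apple\":9}]}"
| spath
| spath path=rows{} output=row
| mvexpand row
| spath input=row
| table index apple carrot
```

mvexpand copies the index field onto each expanded event, and spath input=row only sees one JSON object at a time, so a missing field simply stays null in that row.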