All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I looked for an answer and some came close, but I could not get it working. Here is the problem description: I have a field that contains the status of a ticket ("created_done"). I can easily count the values like this:

| stats count(eval(created_done="created")) as created count(eval(created_done="done")) as done by title impact

However, I would like something like this:

| stats count by title impact status

where status is a field holding the number of solved tickets and the number of open tickets:

Title    Impact    Status    Count
title 1  impact 1  solved    90
title 1  impact 1  open      5
title 1  impact 2  solved    45
title 1  impact 2  open      3

This has probably already been answered, and I apologize in advance, but I could not get any solution working.

Kind regards, Mike
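A minimal sketch of one way to get that shape, assuming the solved/open mapping implied by the sample table (created → open, done → solved); that mapping is an assumption, so adjust the case() branches as needed:

```
... base search ...
| eval status=case(created_done="done", "solved", created_done="created", "open")
| stats count by title impact status
```

Deriving the field with eval before the stats lets it participate in the by clause like any other field.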
Every time I modify the pass4SymmKey in outputs.conf that I need in order to forward to a different Splunk environment, it ends up getting rewritten to the pass4SymmKey that is in server.conf. How do I set two different pass4SymmKey values, one for my own Splunk environment and one for the other environment I need to forward logs to?
Hi all, during a scan of our infrastructure, our system detected that Splunk version 8.1.7.1 still contains a log4j library vulnerable to CVE-2021-44228. Prior to the upgrade to this version, we deleted the compromised files following the workaround in the Splunk blog post about this topic. Since this version is supposed to have that vulnerability patched, I'd like to understand how the fix works, given that log4j files are still present in the affected version. Thanks in advance for your help. Best regards.
Hi Splunkers, is it feasible to collect data from a DB2/AS400 server using Splunk? That is, to collect data stored in a DB2 database hosted on an AS400 server. Thanks in advance! Cheers!
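For what it's worth, the route usually described for relational sources is Splunk DB Connect plus a JDBC driver; for DB2 on IBM i (AS/400) that is typically IBM's jt400 (Toolbox for Java) driver, with a connection URL shaped roughly like this (host and library are placeholders, and treat this as a hedged sketch rather than a verified recipe):

```
jdbc:as400://<as400-host>;libraries=<library>;prompt=false
```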
Hello, can anybody recommend an add-on for finding the reputation of an IP in search results? With high hopes I downloaded the VirusTotal app https://splunkbase.splunk.com/app/4283/#/details, but was disappointed to find that it does not show a reputation score for an IP field. It does show one for file hashes, domains, and URLs, but not IPs. The requirement is for a TA or add-on that we can use in our own searches to get the IP reputation as a field in the results.
I added a new index to my enterprise server, but on the indexer I cannot add it because it will not allow me to select the custom app.
Hi, I have a requirement where the drilldown panel query should change based on the token value passed from the parent panel. The condition is this: the parent panel token may pass either SUCCESS or FAILURE as its value. If it is FAILURE, the drilldown panel should execute one query; for SUCCESS, it should execute a different one.
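A hedged Simple XML sketch of one common pattern: rather than swapping the query in place, the drilldown sets one of two tokens and each target panel depends on its own token (the token names and the $click.value$ source are assumptions):

```xml
<drilldown>
  <condition match="$click.value$ == &quot;FAILURE&quot;">
    <set token="show_failure">$click.value$</set>
    <unset token="show_success"></unset>
  </condition>
  <condition>
    <set token="show_success">$click.value$</set>
    <unset token="show_failure"></unset>
  </condition>
</drilldown>
```

The FAILURE panel then carries depends="$show_failure$" and runs its own query, while the SUCCESS panel depends on $show_success$.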
I got the 'Phantom Community Edition - Access Granted' mail, but the registration link had expired, so I can't access it. https://my.phantom.us/registration_complete?token=ogxEhOMPccGvtrxBBIES6bhs61fpIdDspcGa2lyca44w4opts6XQuEzTA3WQDJFY Could you send a new link?
I want to limit the search to events whose "dest" values are part of a lookup. Currently I am getting all events.

Lookup: host.csv
Lookup columns: aa bb

I tried something like this:

|tstats summariesonly=f count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where (condition) [|inputlookup host.csv |fields + aa |rename aa as Processes.dest] by Processes.dest

Any help would be appreciated! Thanks
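For reference, a hedged sketch of how such a subsearch filter is often written: the subsearch sits in the where clause and, once the lookup column is renamed, expands to a list of Processes.dest=<value> terms (the column name aa is taken from the question):

```
| tstats summariesonly=f count min(_time) as firstTime max(_time) as lastTime
    from datamodel=Endpoint.Processes
    where [| inputlookup host.csv | fields aa | rename aa as Processes.dest]
    by Processes.dest
```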
I want a search whose output feeds the next search, provided that each event occurred at a common time. For example: a source with a specific port connects to several destinations, and then those destinations become the sources of the next search, provided each occurred at the same time.

search 1: index=fgt src=172.26.122.1 dest_port=443 (dest=172.20.120.1 OR dest=172.20.120.2) | stats count by src,dest,_time

search 2: search1 (src=172.20.120.1 OR src=172.20.120.2) | stats count by src,dest,_time
Hi all, after the last Windows update (JAN-2022), a Windows TA input blacklist filter for Security log events no longer works; before the update it worked fine. The blacklist filter looks like this:

blacklist = EventCode="(4634|4672)" Message="Account\sName:\s+(?i)([\S+]+[\$]|serviceaccount1|serviceaccount2)"

The blacklist should filter out computer accounts and certain service accounts for the listed event codes. Does anyone have the same problem, or can someone help with this? Thanks a lot
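For comparison, a slightly simplified restatement of the same filter; the character class [\S+]+[\$] does match machine accounts ending in $, but a plain \S+\$ says the same thing more directly, and a monthly update may have changed the rendered Message text, so the first step is to diff a raw post-update event against the pattern (this is a sketch, not a verified fix):

```
# inputs.conf on the forwarder (sketch; verify against post-update event text)
blacklist = EventCode="(4634|4672)" Message="(?i)Account\sName:\s+(\S+\$|serviceaccount1|serviceaccount2)"
```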
Hi All, I want to extract the final word from lines like these:

nodeUrl=https://sappbos.aexp.com/odata.svc/v1.0/BlazeoData/UddsSappSupplierPymntAccpts?
nodeUrl=https://merchantcompass.aexp.com/odata.svc/v1.0/BlazeoData/MerchantCompassLookups

Can someone please guide me?
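If the goal is the last path segment of each URL (an assumption based on the examples), a rex sketch like this might do it; the input field name nodeUrl and the output name endpoint are assumptions:

```
| rex field=nodeUrl "/(?<endpoint>[^/?]+)\??$"
```

Against the two sample values this would yield UddsSappSupplierPymntAccpts and MerchantCompassLookups, with the optional trailing ? discarded.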
Hi, I want to integrate my website with Splunk to track pageviews, sessions, etc. Is there any tutorial about this?
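One commonly used route is the HTTP Event Collector (HEC): enable a token in Splunk and have the site (or its backend) POST events to it. A hedged curl-shaped sketch, with host, token, and event fields as placeholders/assumptions:

```
curl -k https://<splunk-host>:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"sourcetype": "web:pageview", "event": {"action": "pageview", "page": "/index.html", "session": "<session-id>"}}'
```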
I am trying to get data into Splunk to show the members of the local/built-in Windows groups, in particular "Administrators" and "Remote Desktop Users", using the Splunk forwarder. I am using a WMI (WQL) query to do this via wmi.conf (C:\Program Files\SplunkUniversalForwarder\etc\apps\Splunk_TA_windows\local\wmi.conf).

This stanza currently works (FYI: Fakenameofserver = hostname):

disabled = 0
## Run once per day
interval = 86400
wql = ASSOCIATORS OF {win32_group.Domain="Fakenameofserver",Name="Administrators"} where assocClass=win32_groupuser Role=GroupComponent ResultRole=PartComponent
index = window

I don't want to have to prefill the WQL queries in wmi.conf with the server name on each server. How do I use an environment or Splunk variable to replace "Fakenameofserver" with the name of the host the Splunk forwarder is running on? I have tried a number of combinations of $host, %host%, %servername%, %computername%, etc. Every time I restart the forwarder to force the query to run, I get no data into Splunk, and the log file says:

Error occurred while trying to retrieve results from a WMI query (error="Object cannot be found." HRESULT=80041002) (root\cimv2: ASSOCIATORS OF {win32_group.Domain="%VARIABLENAME%",Name="Remote Desktop Users"} where assocClass=win32_groupuser Role=GroupComponent ResultRole=PartComponent)

Has anyone had success with this? Can you suggest how I can get the stanza to resolve the variable into its value when it queries? Where should I define the variables (if required), and what syntax do I use when writing them in the WQL query? Thanks for any suggestions.
Hi, why does my background-color property not work while the font-size property does? Thanks

  <row> <panel> <html> <style> <b> <font size="4" face="verdana" color="blue" background-color="white"> <center> </center> </font> </b> </style> </html> </panel> </row>
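For what it's worth, background-color is a CSS property, not a recognized attribute of the font tag (which only takes size, face, and color), which is likely why only part of the styling takes effect. A sketch using an inline CSS style instead, with the div and its text as placeholders:

```xml
<row>
  <panel>
    <html>
      <div style="font-size: 16px; font-family: verdana; color: blue; background-color: white; font-weight: bold; text-align: center;">
        Your panel text here
      </div>
    </html>
  </panel>
</row>
```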
I am trying to find frequently used search filters from my application log. I have written the query below to extract JSON from the log and store it in the search_filter field:

index="*" "Searching records" | rex field=_raw "(?P<search_filter>\{.*\})" | eval search_filter=replace(search_filter,"\\\\\"","\"")

The resulting JSON in the search_filter field for one event looks like this:

{ "pageSize": 0, "offset": 0, "criteria": [ { "field": "status", "operator": "equalsIgnoreCase", "values": ["REPROCESS"] }, { "field": "id", "operator": "equals", "values": ["352353"] } ] }

Now I want to convert this JSON into the format below and then sort the array and list it in a table ordered by count:

[status##equalsIgnoreCase,id##equals]

I tried the following:

index="*" "Searching records" | rex field=_raw "(?P<search_filter>\{.*\})" | eval search_filter=replace(search_filter,"\\\\\"","\"") | eval cfo=json_extract(search_filter, "criteria{}.field", "criteria{}.operator") | eval cf=json_extract(cfo,"{0}") | eval co=json_extract(cfo,"{1}") | eval cfos=mvzip(cf, co, "##")

This produces cfos values like the following, which is not what I want; I am not able to use mvzip on the JSON arrays:

["status","id"]##["equalsIgnoreCase","equals"]

Any suggestions on how to go about this, or is there a better way to find frequently used filters in my scenario?
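Outside SPL, the intended transformation is straightforward to state; a Python sketch of the target pairing, assuming the JSON shape shown above (the function name is just for illustration):

```python
import json

def filter_signature(search_filter: str) -> list[str]:
    """Pair each criterion's field with its operator as field##operator."""
    doc = json.loads(search_filter)
    return [f"{c['field']}##{c['operator']}" for c in doc.get("criteria", [])]

example = '''{"pageSize": 0, "offset": 0, "criteria": [
  {"field": "status", "operator": "equalsIgnoreCase", "values": ["REPROCESS"]},
  {"field": "id", "operator": "equals", "values": ["352353"]}]}'''

print(filter_signature(example))  # ['status##equalsIgnoreCase', 'id##equals']
```

The key point is that the pairing happens per criteria element, not across two whole arrays, which is why mvzip on the two parallel JSON arrays produces the wrong shape.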
Hello, I'm currently working on configuring SSL from a UF sitting on a Windows server to an HF running on RHEL 7. I am using third-party certs obtained from my lab Windows PKI environment, which has two tiers of CA (RootCA/SubCA); the HF has a signed cert (.pem file), and the UFs share a single cert signed by the same subordinate CA. I've managed to successfully confirm that the SSL connection is happening using Splunk's "Validate your configuration" doc, but the configuration seems to contradict itself, so I would appreciate some insight into where I may have gone wrong.

The contradiction involves the requireClientCert parameter on the Splunk HF and the clientCert parameter on the universal forwarder. I have tested several times after restarting the Splunk service on both machines, and the connection ONLY works when I have clientCert configured in outputs.conf AND requireClientCert = false. If either of these is changed, or if I remove the clientCert parameter (even though it should technically not be required), I get a connection error. For example, with the configuration below, if I change requireClientCert = true, the entire connection fails, even though I believe the clientCert .pem file is configured correctly.
My certificate chains for each host are as follows:

HF:

adcs.pem (shared with clients via deployment server, to be used for sslRootCaPath parameter)
<intermediate (subordinate/issuing) ca certificate>
<root ca certificate>

heavyforwarder01.pem (certificate chain to be used for serverCert parameter)
<hf01.pem cert issued by subordinate ca>
<decrypted rsa key>
<subca_pub.pem>
<rootca_pub.pem>

UF:

server.conf
sslRootCAPath = C:/path/to/adcs.pem

universalforwarder01.pem (certificate chain to be used for clientCert parameter)
<uf01.pem certificate issued by subordinate CA>
<encrypted rsa key for cert>
<subca_pub.pem>
<rootca_pub.pem>

Configuration for outputs.conf on the universal forwarder:

[tcpout]
defaultGroup = splhf01

[tcpout:splhf01]
disabled = 0
server = splhf01.domain.local:9998
clientCert = C:\path\to\universalforwarder01.pem
sslPassword = <redacted>
useClientSSLCompression = true
sslCommonNameToCheck = splhf01.domain.local
sslVerifyServerCert = true

Configuration for server.conf on the universal forwarder:

[sslConfig]
sslRootCAPath = C:\path\to\combined\adcs.pem

Configuration for inputs.conf on the heavy forwarder/indexer:

[default]
host = splhf01

[splunktcp:9997]
disabled = 0

[splunktcp-ssl:9998]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/ssl/s2s/heavyforwarder01.pem
requireClientCert = false
sslVersions = *,-ssl2
sslCommonNameToCheck = splhf01.domain.local

Configuration for heavy forwarder server.conf:

<..>
[sslConfig]
sslRootCAPath = /path/to/combined/adcs.pem
<...>
Hi guys, I am definitely a Splunk novice. I want to run a search with the Splunk REST API; it is a tstats search on a data model. The issue I am facing is that the results take extremely long to return. When I run the same search in the front end it is extremely fast, but via the REST API it takes between 7-10 minutes for 3 results, whereas the front end returns them quickly. My search is structured as follows:

https://<server>:8089/services/search/jobs -d " search= |  tstats summariesonly=1 values(<value>) (there are a few values after this) from datamodel=<datamodel name> WHERE (some values for the values option before the from) | head 3" -d earliest= -5m@m -d latest =now -d output_mode=json

When I run an index search on which the data model is built (which is slower in the front end), it returns results as soon as I hit the /results endpoint, but the data model search takes extremely long. Any ideas as to what my problem could be? The search does eventually return the results, but it takes very long for the result size I'm requesting.
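For comparison, a hedged sketch of the request shape: the REST search endpoints take earliest_time/latest_time (not earliest/latest as shown above), and the /jobs/export variant streams results as they arrive instead of requiring a separate poll of /results, which can help show whether the delay is in the search itself or in job polling. Host, credentials, and the search body are placeholders:

```
curl -k -u <user>:<password> https://<server>:8089/services/search/jobs/export \
  --data-urlencode 'search=| tstats summariesonly=1 count from datamodel=<datamodel name> | head 3' \
  -d earliest_time=-5m@m -d latest_time=now -d output_mode=json
```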
Hi guys, I'm trying to run a search against the /jobs endpoint; however, I get a

bash: syntax error near unexpected token `('

error message. My search has quotes in it for a | rex command, and I tried escaping the quotes with \ but I still get the issue. When using \ I get an

<msg type="ERROR">Unparsable URI-encoded request data</msg>

error. My search is structured as follows:

|  tstats summariesonly=1 values(<values>) ....(there are a lot of these) from datamodel=<name> WHERE (some values for the previous section) | lookup <lookup> | rex field=<name> "(?<new field name>[^.]{9}$)" ...

There are about four lookups in total and two rex commands; however, when I try to escape in the rex command I get the Unparsable URI error. Has anybody come across this error before?
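A sketch of what is likely going on: bash interprets the unescaped parentheses and quotes in the SPL, and ad-hoc backslash-escaping then sends bytes the REST endpoint cannot URI-decode. Percent-encoding the whole search string sidesteps both problems. A Python illustration (the search text here is a stand-in, not the original query):

```python
from urllib.parse import urlencode

# Percent-encode the SPL so pipes, quotes, and parentheses in the rex
# pattern never reach the shell or the URI parser unescaped.
spl = r'| tstats summariesonly=1 count from datamodel=MyModel | rex field=host "(?<tail>[^.]{9}$)"'
body = urlencode({"search": spl, "output_mode": "json"})
print(body)
```

With curl, `--data-urlencode 'search=...'` (single-quoted, so bash leaves the contents alone) performs the same encoding on the way out.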
I'm still new and struggling with the following. I am looking at a set of data from three probes. If all three probes show a success value of "down" for the reporting period, then I would like the result to be "down", but if one or more show "up", then it should be considered "up" for that time span. For instance:

12:00 - Probe1 - Up
12:00 - Probe2 - Down
12:00 - Probe3 - Up
13:00 - Probe1 - Down
13:00 - Probe2 - Down
13:00 - Probe3 - Down

So, for 12:00 the value would be Up, and for 13:00 the value would be Down.
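A sketch of one way to express that roll-up in SPL, assuming a field named success with values up/down and hourly reporting buckets (both the field name and the span are assumptions):

```
| bin _time span=1h
| eval is_up=if(success="up", 1, 0)
| stats max(is_up) as any_up by _time
| eval status=if(any_up=1, "Up", "Down")
```

Taking max() over the 0/1 flag implements "up if any probe is up" per time bucket; min() would give the opposite, "up only if all probes are up".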