All Topics

Good day. I have a Splunk 8.2.3 on-premise instance and it does not want to start. When I start it manually I get the following:

[root@srvsplunk ~]# systemctl status -l splunk
● splunk.service - SYSV: Splunk indexer service
   Loaded: loaded (/etc/rc.d/init.d/splunk; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Wed 2022-07-06 16:23:25 -05; 12min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 1495 ExecStart=/etc/rc.d/init.d/splunk start (code=exited, status=127)
Jul 06 16:23:25 srvsplunk splunk[1495]: Checking kvstore port [8191]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
Jul 06 16:23:25 srvsplunk splunk[1495]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
Jul 06 16:23:25 srvsplunk splunk[1495]: open
Jul 06 16:23:25 srvsplunk splunk[1495]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
Jul 06 16:23:25 srvsplunk splunk[1495]: /opt/splunk/bin/splunkd: error while loading shared libraries: libjemalloc.so.2: cannot open shared object file: No such file or directory
Jul 06 16:23:25 srvsplunk splunk[1495]: SSL certificate generation failed.
Jul 06 16:23:25 srvsplunk systemd[1]: splunk.service: control process exited, code=exited status=127
Jul 06 16:23:25 srvsplunk systemd[1]: Failed to start SYSV: Splunk indexer service.
Jul 06 16:23:25 srvsplunk systemd[1]: Unit splunk.service entered failed state.
Jul 06 16:23:25 srvsplunk systemd[1]: splunk.service failed.

How can I fix my Splunk system? Thanks.
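A minimal shell sketch of one way to start investigating, assuming a default /opt/splunk install (the paths come from the error output above; <build> is a placeholder for the build hash in the downloaded package name):

    # check whether the bundled allocator library that splunkd complains about is present
    ls -l /opt/splunk/lib/libjemalloc.so.2

    # if it is missing, one common remediation is to back up the install and
    # re-extract the matching 8.2.3 tarball over it to restore damaged files
    tar -xzvf splunk-8.2.3-<build>-Linux-x86_64.tgz -C /opt
    /opt/splunk/bin/splunk start

This is a sketch, not a confirmed fix; if the library is present but still not found, the failure may instead be an environment or permissions issue.
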
All, this is another license utilization report mismatch. I have a request to generate a license utilization report per day and save it for historical data. I am using the 30 Days License Usage report as a base for my daily report:

index=_internal host=licensemaster source=*license_usage.log* type="RolloverSummary" earliest=-1d@d latest=-0d@d
| bin _time span=1d
| stats sum(b) as sumb last(stacksz) as laststacksz by _time component
| eval sumgb=round(sumb/1024/1024/1024, 3)
| eval laststackszgb=round(laststacksz/1024/1024/1024, 3)

And it is giving me the result as expected. I want to go further and get the license utilization per hour, so I changed the search to:

index=_internal host=licensemaster source=*license_usage.log* type=Usage earliest=-1d@d latest=-0d@d
| stats sum(b) as sumb last(poolsz) as lastpoolsz by _time
| eval sumgb=round(sumb/1024/1024/1024, 3)
| eval lastpoolszg=round(lastpoolsz/1024/1024/1024, 3)
| addcoltotals sumb

But the result is lower than the daily one: 967069668524 bytes is 900.656 GB. What am I doing wrong? I am running Splunk Enterprise 8.2.6. Thank you, Gerson Garcia

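A minimal sketch of an hourly variant, assuming the same license_usage.log data as above. The key addition is an explicit | bin _time span=1h; without binning, stats ... by _time groups on raw event timestamps rather than hourly buckets, and differences in how the time window lines up with the RolloverSummary event can also make the totals disagree:

    index=_internal host=licensemaster source=*license_usage.log* type=Usage earliest=-1d@d latest=-0d@d
    | bin _time span=1h
    | stats sum(b) as sumb last(poolsz) as lastpoolsz by _time
    | eval sumgb=round(sumb/1024/1024/1024, 3)
    | eval lastpoolszgb=round(lastpoolsz/1024/1024/1024, 3)
    | addcoltotals sumb sumgb
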
New to Splunk and banging my head against the wall with this problem for over a day now. Please help... I need to compare two different fields from two different events to determine whether the values of those fields match. I ran a search that returns events. All events have an ACCOUNT_NUM field. Depending on the event, it will have either a DATE_TYPE1 field or a DATE_TYPE2 field. The report should display each distinct ACCOUNT_NUM that has one of each date type - so, a column for ACCOUNT_NUM, a column for DATE_TYPE1, a column for DATE_TYPE2, and a column for DATE_STATUS ("Match" or "No Match") to indicate whether the two dates match. So far, I have:

| stats values(DATE_TYPE1) AS "Date One" values(DATE_TYPE2) AS "Date Two" count by ACCOUNT_NUM
| where count > 1

This groups the distinct ACCOUNT_NUMs and shows me the two date types, but how do I indicate whether the two dates match? I tried adding:

| eval DATE_STATUS=if(DATE_TYPE1=DATE_TYPE2, "Match", "No Match")

but this returns "No Match" for all of the events. My understanding is that this is because | eval evaluates each event individually. Since no event has both date types, it is not finding a match. How can I get it to compare the date types of each distinct account number as grouped together by my | stats command?

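A minimal SPL sketch of one approach, using the field names from the post: the comparison has to run after stats, once both date values sit on the same row for each ACCOUNT_NUM.

    | stats values(DATE_TYPE1) as DATE_TYPE1 values(DATE_TYPE2) as DATE_TYPE2 by ACCOUNT_NUM
    | where isnotnull(DATE_TYPE1) AND isnotnull(DATE_TYPE2)
    | eval DATE_STATUS=if(DATE_TYPE1=DATE_TYPE2, "Match", "No Match")

If an account can have several values of either date type, the equality test compares the multivalue fields as a whole, so the "one of each" assumption above matters.
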
Hello everyone, I am trying to ingest data into Splunk. The data is in some .tgz files, but within those files are a lot of different folders and levels of directories. I want to read just one type of file that sits somewhere inside those directories; the path is not absolute but relative, it can change, and the file can be in any directory. So inputs.conf was set up with something like this:

[monitor:///dir1/dir2/Spk/Test/*.tgz]
whitelist=my.log

But this is not working because of this:

"When you configure wildcards in a file input path, Splunk Enterprise creates an implicit allow list for that stanza. The longest wildcard-free path becomes the monitor stanza, and Splunk Enterprise translates the wildcards into regular expressions."
https://docs.splunk.com/Documentation/Splunk/latest/Data/Specifyinputpathswithwildcards

So I am looking for a way to filter those logs using whitelisting. Should I use regular expressions to filter the logs?

Thank you in advance.

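A hedged inputs.conf sketch, using the directory from the post: whitelist is a regular expression matched against the full path of each monitored file, not a filename glob, so the dot needs escaping and an end anchor helps. Whether the allow list is applied to files unpacked from inside a .tgz archive, rather than to the archive path itself, is an assumption worth verifying on a small test input.

    [monitor:///dir1/dir2/Spk/Test]
    # whitelist is a regex applied to the full file path
    whitelist = my\.log$
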
The data is events with a date, username, company, and score. I want to calculate an NPS score by company.

detractors = scores 1-6
passive = scores 7-8
promoters = scores 9-10
NPS = %promoters - %detractors

Help would be highly appreciated.

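A minimal SPL sketch, assuming each event carries numeric score and company fields as described (append it to the base search that returns the survey events):

    | eval category=case(score<=6, "detractor", score<=8, "passive", score>=9, "promoter")
    | stats count as total sum(eval(if(category="promoter", 1, 0))) as promoters sum(eval(if(category="detractor", 1, 0))) as detractors by company
    | eval NPS=round(100*(promoters - detractors)/total, 1)

NPS here comes out as a score between -100 and 100.
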
If I download from the free trial link, may I use that for the upgrade without problems?
Hi, I am trying to update an incident that was created by an alert action from Splunk ITSI, but every time the alert gets triggered, a new incident is created instead of the existing incident being updated. I tried everything mentioned in the link below:
https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/Commandsandscripts#Update_behavior_for_incidents

Please guide me as to what needs to be done to update a previously created incident. Should I get the status of the incident from ServiceNow and use that in the search query when I try to update the incident? It would be great if you could point me to any documentation or a video reference that could help me update an incident that was already created. Thanks!

Currently, I have HTML within my XML dashboard that will only appear when a certain token is set. However, whenever I go and click "Edit", the HTML is always visible. Is there any way around this?
Hi Team, we are reviewing the use cases in our Splunk Enterprise Security. We have set throttling to 1 day for a use case, but we want to check how many alerts are being suppressed by the throttling action. Is there a search query or any other way to check that? Is there any way we can show proof that throttling is working correctly? Thanks in advance for your help.

Hello team, I would like to create a dashboard for logs pushed to Splunk on a regular basis. How do I get a real-time dashboard, for both logs and alerts, for applications running on Azure/AWS? I should be able to see these alerts and take remedial action on them.

Best regards, Mercy

Hello All, this is my first time posting to Splunk Community. I've found a lot of value here and hope you all are doing well. I have an add-on built with the Splunk Add-on Builder (I believe version 4.1.0) that contains an alert action that packages up search results and sends them to a HEC input. I am utilizing George Starcher's Python class for sending events to HEC inputs (https://github.com/georgestarcher/Splunk-Class-httpevent). The alert action works perfectly except when I enable the proxy - then I am hit with the error message:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/splunk_http_event_collector.py", line 287, in _batchThread
    response = self.requests_retry_session().post(self.server_uri, data=payload, headers=headers, verify=self.SSL_verify, proxies=proxies)
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/sessions.py", line 635, in post
    return self.request("POST", url, data=data, json=json, **kwargs)
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/requests/adapters.py", line 499, in send
    timeout=timeout,
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connectionpool.py", line 696, in urlopen
    self._prepare_proxy(conn)
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connectionpool.py", line 964, in _prepare_proxy
    conn.connect()
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connection.py", line 359, in connect
    conn = self._connect_tls_proxy(hostname, conn)
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/connection.py", line 506, in _connect_tls_proxy
    ssl_context=ssl_context,
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/util/ssl_.py", line 453, in ssl_wrap_socket
    ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
  File "/opt/splunk/etc/apps/<appname>/bin/<appname>/aob_py3/urllib3/util/ssl_.py", line 495, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock)
  File "/opt/splunk/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 827, in _create
    raise ValueError("check_hostname requires server_hostname")
ValueError: check_hostname requires server_hostname

Has anyone come across similar behavior? I am trying a variety of different things but this has quickly gone over my head. Any help or direction would be greatly appreciated. Please let me know what information I can provide. Thank you.

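One thing to check, as an assumption rather than a confirmed diagnosis: the traceback goes through urllib3's TLS-in-TLS proxy path (_connect_tls_proxy), which is only taken when the configured proxy URL uses an https:// scheme; most corporate proxies are reached over plain HTTP and tunnel HTTPS requests via CONNECT. A hedged, self-contained Python sketch of how the proxies mapping is normally passed to requests, with placeholder host names and token:

    import requests

    # Placeholder proxy endpoint; note the http:// scheme on both keys, so the
    # proxy connection itself is plain HTTP and HTTPS is tunnelled via CONNECT.
    proxies = {
        "http": "http://proxy.example.com:8080",
        "https": "http://proxy.example.com:8080",
    }

    response = requests.post(
        "https://splunk-hec.example.com:8088/services/collector/event",  # placeholder HEC URI
        json={"event": "proxy connectivity test"},
        headers={"Authorization": "Splunk <hec-token>"},  # placeholder token
        proxies=proxies,
        verify=False,  # mirrors the class's SSL_verify flag; enable verification where possible
        timeout=10,
    )
    print(response.status_code, response.text)
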
We're migrating Splunk from an on-premise environment to the Cloud, and are done with setting up forwarders to send the data to Splunk Cloud. However, we have a large number of alerts, reports, and dashboards created on Splunk On-Premise. Is there a way to transfer these (alerts, reports, and dashboards) from Splunk On-Premise to Splunk Cloud? Thanks.

I am looking for a Python script to download a dashboard as an image, or to send a dashboard as an HTML mail with a header and footer, using Python or a client-side script. Is there a way to authenticate to the Splunk URL using SSO or a token? Currently it uses Microsoft SSO.

Hi, I have now filled out this web form twice in the last 24 hours to join the Splunk Usergroups Slack channel, but I still have not received the expected reply:
https://docs.google.com/forms/d/e/1FAIpQLSd2PXSBiatZvCIpdE2wPFgnrUM29HBYjrkI0iDhlx26RwwE4A/viewform

Can anyone help? I need to join this Slack group for my current development work. Many thanks,

Hello community, I use Splunk for one of my projects and I have a question. I have a query which roughly looks like this:

index=app* rum.plugin="myPluginId" rum.status="Error" rum.apiCall="apiCallName"
| chart count by rum.companyId

which gives a result like:

rum.companyId | count
456789456     | 6
827634966     | 2
456789057     | 4
098765456     | 6
123456789     | 677

and I run this query for the last 24 hours. Now I want to check, for each of the companyIds listed, whether a similar error occurred for that company (rum.companyId) in the past. If it has occurred, show the timestamp of the first occurrence. So my expected output is something like:

rum.companyId | count | First occurrence timestamp
456789456     | 6     | 20/04/90 04:04:04
827634966     | 2     | 20/04/90 04:04:04
456789057     | 4     | 20/04/90 04:04:04
098765456     | 6     | 20/04/90 04:04:04
123456789     | 677   | 20/04/90 04:04:04

Is there any way to achieve this? Thanks in advance.

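A minimal sketch of one approach, assuming the same base search and an explicit lookback window (90 days here is an arbitrary placeholder for "the past"): run over the longer window, keep the count restricted to the last 24 hours, and take the earliest timestamp across the whole window.

    index=app* rum.plugin="myPluginId" rum.status="Error" rum.apiCall="apiCallName" earliest=-90d@d
    | stats sum(eval(if(_time>=relative_time(now(), "-24h"), 1, 0))) as count min(_time) as first_seen by rum.companyId
    | where count > 0
    | eval "First occurrence"=strftime(first_seen, "%d/%m/%y %H:%M:%S")
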
Hey all, I need a PDF of the .conf22 schedule for this year to send to my job, and I cannot download a PDF from the app. I need the full schedule for Splunk University and the week of the conference. Where can I download this PDF? Thanks!

I'm posting this in case someone else has the problem I struggled with. I was calculating a list of upload and download totals per web domain per company location. The format of the table was such that I ended up with the company location, followed by a multivalued list of the web domains and a multivalued list of the byte totals. The byte totals, being 7 to 8 digit numbers, are easier to read with commas, but the usual formatting solution:

eval Download=tostring(Download, "commas")
eval Upload=tostring(Upload, "commas")

had mixed results depending on where in the query I placed it. After the initial transformation command, it messed up my sorting since the value was now a string. At the end, it summed the multivalue field and then put the commas in, so that didn't help either.

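A hedged sketch of one way around it, assuming Download and Upload are the multivalued byte totals: do any numeric sorting first, then convert to comma-formatted strings as the very last step, using mvmap so each value in the multivalue field is formatted individually.

    | eval Download=mvmap(Download, tostring(Download, "commas"))
    | eval Upload=mvmap(Upload, tostring(Upload, "commas"))
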
I recently discovered that "tstats" is returning sourcetypes which do not exist. Query:

| tstats values(sourcetype) where index=* by index

This returns a list of sourcetypes grouped by index. While it appears to be mostly accurate, some sourcetypes which are returned for a given index do not exist. For example, the sourcetype "WinEventLog:System" is returned for myindex, but the following query produces zero results:

index=myindex sourcetype="WinEventLog:System"

This is the case for multiple indexes. If my understanding of "tstats" is correct, it works by only analyzing indexed fields which are stored in the tsidx files. If no events exist with a given sourcetype for a specific index, how could that value have possibly been saved in the tsidx files?

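As an assumption to verify rather than a definitive answer: tstats reads the indexed metadata in the tsidx files over whatever time range it is given (often All Time), so a mismatch like this frequently comes down to the two searches covering different time ranges, or to buckets that still hold tsidx entries for events the event search no longer returns. A minimal sketch that shows when the stray sourcetype was actually indexed, to compare against the event search over the same range:

    | tstats count where index=myindex sourcetype="WinEventLog:System" by _time span=1d
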
Hi Splunkers, this may be easy, but I'm not able to solve it; any help is appreciated. I want to set a lower threshold at 15 standard deviations below the mean, and an upper threshold at 15 standard deviations above the mean, but I'm not sure how to implement that. Thanks!

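A minimal SPL sketch, assuming a numeric field named value (a placeholder for whatever metric is being monitored); eventstats attaches the mean and standard deviation to every event so the thresholds can be applied per row:

    | eventstats avg(value) as mean stdev(value) as sd
    | eval lower_threshold=mean - 15*sd
    | eval upper_threshold=mean + 15*sd
    | where value < lower_threshold OR value > upper_threshold
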
I'm trying to get a count for activity on around 10 different APIs. The search is:

index=api_logs
| bin span=5min _time
| stats count by _time, APIName

Is it possible to use stats count so the output includes an entry for each API in each 5-minute period and reports a '0' if there hasn't been a call? I know you could chart it, but I'd like the data in this particular format.

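A minimal sketch of one way to get zero-filled rows in that shape, using the search from the post: timechart fills empty buckets with 0 for count, and untable turns the chart back into _time / APIName / count rows.

    index=api_logs
    | timechart span=5m count by APIName limit=0 useother=f
    | untable _time APIName count

One caveat: an API with no events at all in the search window never appears, since there is nothing to split on.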