All Topics

Hello all, can I generate a scheduled PDF delivery from the command line? I want to export a dashboard as a PDF file and send it via email. Is that possible, and if so, how?
I have an index B which has job_name and job_status details, and another index A which has ticket_number and job_name. Index A needs to be joined with index B to get ticket_number, job_name and job_status. I tried "join type=inner job_name" to join on job_name, but I get empty results for job_status. What might be going wrong?
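A hedged sketch of one common alternative to join: search both indexes at once and roll the fields up with stats by job_name. The index names A and B and the field names are assumptions taken from the question.

(index=A OR index=B)
| stats values(ticket_number) as ticket_number values(job_status) as job_status by job_name
| table ticket_number job_name job_status

If join is still preferred, it is worth confirming that job_name is extracted with exactly the same name and casing in both indexes; a field-extraction mismatch is a frequent cause of empty join results.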
Hi everyone, we're currently dealing with an odd one on the Enterprise search head (we're using 8.2.3). We have multiple roles that grant search access to certain indexes, and seemingly at random, people in the company start complaining that their searches are forbidden. After some quick investigation I've seen that in their roles, out of (for example's sake) 5 allowed indexes, one had been unchecked (i.e. removed). Do you know why this behavior is happening? Cheers, Darius
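A hedged sketch of one way to track when role definitions change: snapshot them via the REST endpoint and compare over time. This assumes the account running it is allowed to use the rest command on the search head.

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesAllowed srchIndexesDefault

Comparing scheduled snapshots of this output, alongside whatever the deployment process pushes in authorize.conf, can help show whether an app push or a manual edit is resetting the role.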
Hi, I have a table like this:

part_of_url   count
/test1        1
/test2        2
/test3        3

I want a drilldown link that opens a new tab when clicking a cell in the "part_of_url" column. I tried this code:

<drilldown>
  <condition field="part_of_url">
    <link target="_blank">
      <![CDATA[https://url.net$click.value$-/results/ ]]>
    </link>
  </condition>
</drilldown>

But it did not work. Can you help me?
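A hedged sketch of a drilldown that has worked in similar cases, assuming the clicked value should simply be inserted into the URL (https://url.net and the /results/ suffix are taken from the question; adjust them to the real URL pattern):

<drilldown>
  <condition field="part_of_url">
    <link target="_blank">
      <![CDATA[https://url.net$click.value|n$/results/]]>
    </link>
  </condition>
</drilldown>

Two things to compare against the original: the stray "-" before /results/ and the trailing space inside the CDATA, and how the clicked value is escaped (the |n filter passes it through unmodified, while |u forces URL-encoding).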
Hi, I have a table like this:

test      count
test AA   1
test AB   2
test C    3

I want to merge "test AA" and "test AB", which together should give me a count of 3. What I want:

test      count
test A    3
test C    3

How can I do that?
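A minimal sketch of one way to do this, assuming every value starting with "test A" should be collapsed into "test A" (the field names test and count are taken from the question):

... | eval test=if(like(test, "test A%"), "test A", test)
| stats sum(count) as count by test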
We are transferring logs using log drains and a token created with the HTTP Event Collector. We need to filter the data entering Splunk Cloud. There are a few keywords we want to eliminate altogether to reduce the size of our ingestion: around 50% of the data being ingested is not required, and it comes from a third party that doesn't have controllable log levels. How can we drop events containing these keywords and prevent them from being ingested into Splunk? Or is there a way to filter the data after it arrives in Splunk to reduce the ingestion size? Thanks, Dee
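A hedged sketch of the usual index-time approach: route events matching a keyword to the nullQueue on the parsing tier so they are never indexed. The sourcetype name your_sourcetype and the keyword regex are placeholders; on Splunk Cloud this is typically applied via a support ticket or an app that can ship index-time props/transforms, and whether it applies at all can depend on which HEC endpoint (/event vs /raw) the log drain uses.

props.conf:
[your_sourcetype]
TRANSFORMS-drop_noise = drop_noise_keywords

transforms.conf:
[drop_noise_keywords]
REGEX = keyword1|keyword2|keyword3
DEST_KEY = queue
FORMAT = nullQueue

Filtering after the data is indexed does not reduce license/ingestion usage, so the nullQueue (or filtering at the source) is the usual answer when the goal is to cut ingest volume.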
I have data coming in with a field "Log_date" in DD/MM/YYYY format. I need to show the last 7 days of data from today in the dashboard. I used the below to filter the last 7 days, but it is not showing any results: where Log_date>=relative_time(Log_date, "-6d@d") I need to use only the date from the field "Log_date". Can someone please guide me?
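A hedged sketch: Log_date is a string, so it has to be parsed into epoch time with strptime before it can be compared, and the comparison should be against now() rather than against Log_date itself (the field name Log_date and the DD/MM/YYYY format are taken from the question):

... | eval log_epoch=strptime(Log_date, "%d/%m/%Y")
| where log_epoch >= relative_time(now(), "-6d@d")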
Hi, I am new to Splunk and running both Splunk Enterprise and Universal Forwarder in a Docker container (on the same host for now). My forwarder keeps shutting down, and I am not quite sure why. Probably a configuration issue. This is the last error I find in the logs before the shutdown:       TASK [splunk_common : Test basic https endpoint] ******************************* fatal: [localhost]: FAILED! => { "attempts": 60, "changed": false, "elapsed": 10, "failed_when_result": true, "redirected": false, "status": -1, "url": "https://127.0.0.1:8089" } MSG: Status code was -1 and not [200, 404]: Request failed: <urlopen error _ssl.c:1074: The handshake operation timed out> ...ignoring         after that there is a bit more in the logs, before it finally stops working:       Monday 20 December 2021 09:57:02 +0000 (0:17:30.719) 0:18:04.687 ******* TASK [splunk_common : Set url prefix for future REST calls] ******************** ok: [localhost] Monday 20 December 2021 09:57:02 +0000 (0:00:00.088) 0:18:04.775 ******* included: /opt/ansible/roles/splunk_common/tasks/clean_user_seed.yml for localhost Monday 20 December 2021 09:57:02 +0000 (0:00:00.214) 0:18:04.989 ******* [WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an- unprivileged-user TASK [splunk_common : Remove user-seed.conf] *********************************** ok: [localhost] Monday 20 December 2021 09:57:03 +0000 (0:00:00.653) 0:18:05.643 ******* included: /opt/ansible/roles/splunk_common/tasks/add_splunk_license.yml for localhost Monday 20 December 2021 09:57:03 +0000 (0:00:00.228) 0:18:05.871 ******* TASK [splunk_common : Initialize licenses array] ******************************* ok: [localhost] Monday 20 December 2021 09:57:03 +0000 (0:00:00.082) 0:18:05.954 ******* TASK [splunk_common : Determine available licenses] **************************** ok: [localhost] => (item=splunk.lic) Monday 20 December 2021 09:57:03 +0000 (0:00:00.117) 0:18:06.072 ******* included: /opt/ansible/roles/splunk_common/tasks/apply_licenses.yml for localhost => (item=splunk.lic) Monday 20 December 2021 09:57:03 +0000 (0:00:00.162) 0:18:06.235 ******* Monday 20 December 2021 09:57:03 +0000 (0:00:00.074) 0:18:06.309 ******* Monday 20 December 2021 09:57:03 +0000 (0:00:00.078) 0:18:06.387 ******* Monday 20 December 2021 09:57:03 +0000 (0:00:00.077) 0:18:06.465 ******* included: /opt/ansible/roles/splunk_common/tasks/licenses/add_license.yml for localhost Monday 20 December 2021 09:57:04 +0000 (0:00:00.141) 0:18:06.606 ******* Monday 20 December 2021 09:57:04 +0000 (0:00:00.079) 0:18:06.686 ******* [WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. 
For information on securing this, see https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user
TASK [splunk_common : Ensure license path] *************************************
ok: [localhost]
Monday 20 December 2021 09:57:04 +0000 (0:00:00.622) 0:18:07.308 *******
Monday 20 December 2021 09:57:04 +0000 (0:00:00.074) 0:18:07.382 *******
Monday 20 December 2021 09:57:04 +0000 (0:00:00.108) 0:18:07.491 *******
Monday 20 December 2021 09:57:05 +0000 (0:00:00.076) 0:18:07.568 *******
included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/set_as_hec_receiver.yml for localhost
Monday 20 December 2021 09:57:05 +0000 (0:00:00.118) 0:18:07.686 *******
[WARNING]: Using world-readable permissions for temporary files Ansible needs to create when becoming an unprivileged user. This may be insecure. For information on securing this, see https://docs.ansible.com/ansible/user_guide/become.html#risks-of-becoming-an-unprivileged-user

outputs.conf:
[indexAndForward]
index = false

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunk:9997
disabled = false

inputs.conf:
[monitor:///docker/py-sandbox/log/*.json]
disabled = 0

[monitor:///docker/fa/log/*.json]
disabled = 0

server.conf (I removed the keys/SSL password; I never created those myself, though):
[general]
serverName = synoUniFW
pass4SymmKey = <somekey>

[sslConfig]
sslPassword = <some pw>

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free

On the Splunk Enterprise side I have opened port 9997, and it actually receives logs, until it doesn't... What am I doing wrong? Thanks!
Hello all, I would like to generate a scheduled PDF delivery for a dashboard bi-weekly (every two weeks) on Friday. But if I use "0 0 */14 * *", the schedule doesn't run exactly every 14 days; it runs only on the 14th and 29th days of the month. I want it to run only on the Friday of every second week, so I tried the cron notation "0 0 */14 * 5". But that schedule runs not only on the 14th and 29th days of the month, but also every Friday. What notation can meet both conditions, every 14 days and on Friday? And is it also possible to run a cron schedule at the end of every month?
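A hedged note and sketch: when both the day-of-month and day-of-week fields are restricted, cron treats them as an OR, and cron has no notion of "every second week" at all. A common workaround is to schedule the job every Friday and discard the run in the off week inside the search itself (this suits a scheduled report or alert whose email action fires only when there are results; a plain scheduled dashboard PDF has no such gate). The week-parity check below is a sketch assuming ISO week number parity is an acceptable definition of "every second Friday":

Cron: 0 0 * * 5
Gate appended to the search:
... | where (tonumber(strftime(now(), "%V")) % 2) == 0

For end of month, a similar pattern is to schedule on days 28-31 and keep only the run where tomorrow is the 1st:

Cron: 0 0 28-31 * *
Gate: ... | where tonumber(strftime(relative_time(now(), "+1d"), "%d")) == 1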
Hi, I am trying this search:

index="wineventlog" host IN (*) EventCode=6006 OR EventCode="6005" Type=Information
| transaction host startswith=6006 endswith=6005 maxevents=2
| eval duration = duration + " Sec"
| table _time host duration

If the duration is in seconds I need it shown in seconds, if it's in minutes then minutes, and the same for hours and days. Thanks in advance.
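A hedged sketch: the duration field produced by transaction is a number of seconds, and tostring(<seconds>, "duration") renders it as a readable days+HH:MM:SS value, which covers seconds through days in one step:

... | transaction host startswith=6006 endswith=6005 maxevents=2
| eval duration = tostring(duration, "duration")
| table _time host duration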
Hi, I need help with a query to display a count based on a particular message, for example "Failed project on ABC". The query should count the matching events and, if the count is greater than 2, display the number. I tried something like this, but it is not working:

index="Project" | stats count(eval(message like("%Failed Project on%")) | where count>2

Could someone suggest a way of achieving this? /nanoo1
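A hedged sketch of a corrected version: like() takes the field as its first argument, the eval needs to be closed properly, and the aggregation needs a name before where can test it (the index and field names are taken from the question):

index="Project"
| stats count(eval(like(message, "%Failed Project on%"))) as failed_count
| where failed_count > 2

Note that like() is case-sensitive, so "Failed project on" and "Failed Project on" are different strings; pick whichever matches the actual events.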
Hey guys, I am a newbie with Splunk, but I have already fallen in love with it. Such a great tool! I was tasked with storing the settings of a website from Cloudflare in Splunk. Without much knowledge I wrote a small Python script that gets the settings data from CF and sends it to Splunk via an HEC token on my local instance. That is one way of doing it, but I'm sure there must be a much slicker way. The question is, what would you recommend to achieve this task? What would be best practice? Thanks in advance, Vadim
My current panel looks like this [screenshot]. How can I make it look like a doughnut chart, like the one below [screenshot], with the total inside the hollow center [screenshot]? Is there any way to do this? Please let me know.
Hello, is it possible to use OR with regex? For example, I have a search with | regex something="", and I need | regex something="" OR | regex something2="". Maybe I should change the construction? Any ideas? Thank you.
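A hedged sketch of two constructions that are commonly used here, since the regex command itself doesn't take OR between two field tests (the field names something and something2 are placeholders from the question). If both tests are on the same field, put the alternation inside one pattern; if they are on different fields, switch to where with match():

... | regex something="pattern1|pattern2"

... | where match(something, "pattern1") OR match(something2, "pattern2")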
Hello Splunkers, in my Splunk dashboard I have a table; when clicking on any cell value of a defined field, a panel opens up below the table. The drilldown panel is not a table view or a chart view; it's an HTML form. How can I show this drilldown panel, the form view, in a pop-up window? TIA,
How can I show my organization's banner at the top of a Splunk dashboard? In my case it appears below the label and filters. Below is my code snippet.

<dashboard theme="light" hideChrome="true" hideExport="true">
  <label>*Welcome To Lending Command Center*</label>
  <description>Lending Command Center is a collection of visualized data which will provide the client with details on its business at various levels of technology and operations.</description>
  <row>
    <html>
      <center><img src="/static/app/search/Header-black@2x-100.jpg"></img></center>
    </html>
  </row>
Hi,

Search 1: used to find out the server health
index=win sourcetype="xmlwineventlog" host=Prod_UI_* | eval Status=if(EventCode=41 OR EventCode=6006 OR EventCode=6008,"Down","Up") | dedup host | table Status

Search 2: used to find out the CPU usage
| mstats avg(_value) prestats=true WHERE metric_name="Processor.%_Processor_Time" AND "index"="winperf" AND host=Prod_UI_* span=60s | eval Timestamp=strftime(_time ,"%d/%m/%Y %H:%M:%S") | stats avg(_value) AS CPU BY host Timestamp | where CPU>=0 | eval Status="Critical",CPU=round(CPU,2),CPU=CPU+"%" | rename host as Hostname | table Timestamp Hostname CPU Status | dedup Hostname | sort - CPU

Search 3: used to find out the memory usage
| mstats avg(_value) prestats=true WHERE metric_name="Memory.%_Committed_Bytes_In_Use" AND "index"="winperf" AND host=Prod_UI_* span=60s | eval Timestamp=strftime(_time ,"%d/%m/%Y %H:%M:%S") | stats avg(_value) AS Memory BY host Timestamp | where Memory>=0 | eval Status="Critical",Memory=round(Memory,2),Memory=Memory+"%" | rename host as Hostname | table Timestamp Hostname Memory Status | dedup Hostname | sort - Memory

Finally, I need a query that shows the server health, CPU and memory usage in a single table, and if CPU usage > 75% or memory usage > 75% it should show the server as Down. Like:

Hostname   Health   CPU    Memory
Google     Down     76%    45%
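A hedged sketch of one way to combine them: append the three searches, collapse to one row per host with stats, and derive the health flag at the end. It reuses the searches from the question (keeping CPU and Memory numeric until the final formatting so the >75 comparison works); treat it as a starting point rather than a drop-in answer.

index=win sourcetype="xmlwineventlog" host=Prod_UI_*
| eval Health=if(EventCode=41 OR EventCode=6006 OR EventCode=6008,"Down","Up")
| stats latest(Health) as Health by host
| append
    [| mstats avg(_value) AS CPU WHERE metric_name="Processor.%_Processor_Time" AND "index"="winperf" AND host=Prod_UI_* BY host
     | eval CPU=round(CPU,2)]
| append
    [| mstats avg(_value) AS Memory WHERE metric_name="Memory.%_Committed_Bytes_In_Use" AND "index"="winperf" AND host=Prod_UI_* BY host
     | eval Memory=round(Memory,2)]
| stats values(Health) as Health max(CPU) as CPU max(Memory) as Memory by host
| eval Health=if(CPU>75 OR Memory>75, "Down", Health)
| eval CPU=CPU."%", Memory=Memory."%"
| rename host as Hostname
| table Hostname Health CPU Memory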
Hello all, one of our home-grown apps copies logs to a directory monitored by Splunk once a day, around midnight. Splunk, however, will not index the events in the log if they contain a past timestamp. The lines in the log look similar to this:

12/18/2021,00:00:20,UDP,Rcv,10.132.133.29,app-measurement.com

These lines are skipped. However, if the line looks like this, it will be indexed:

UDP,Rcv,10.132.133.29,app-measurement.com

It appears that having a date and time in the log is causing the forwarder not to forward the data. Here's the inputs.conf for the Splunk app that handles the files:

[monitor://C:\Logs\CustomApp]
disabled = 0
index = customapp
sourcetype = customappevents
recursive = false
blacklist = \.tmp$
crcSalt = <SOURCE>

Thanks in advance!
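A hedged sketch of props.conf settings that are often checked in this situation, applied on the parsing tier (indexer or heavy forwarder). It assumes either the events are actually indexed with their past timestamps and simply fall outside a recent-time search window, or they are being rejected by timestamp sanity limits; the values below are illustrative, with the sourcetype matching the inputs.conf above:

[customappevents]
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y,%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
# How far in the past an extracted timestamp may be before it is rejected
MAX_DAYS_AGO = 30
SHOULD_LINEMERGE = false

Also worth checking: if ignoreOlderThan is set anywhere for this input in inputs.conf, the forwarder will skip files whose modification time is older than that window.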
I'm playing around to find a way to gather IP addresses from one type of search and then gather other information about those IP addresses. In this example I try to gather all IP addresses that are in the threat-log sourcetype; this sourcetype does not contain session_end_reason, which is in the traffic log. So I try to fetch all IP addresses that access *sonos.com* and use them to get all session_end_reason values for those IP addresses. Maybe it's not possible; it's a proof of concept I'm trying to achieve here.

index="paloalto" src_ip="*" src_ip="*" [search index="paloalto" url="*sonos.com*" src_ip="*"] | table url src_ip session_end_reason

The result from this gives a blank session_end_reason. If I have this in my first search:

index="paloalto" src_ip="*" src_ip="*" session_end_reason="*"

everything ends up blank. And this at the end:

| table src_ip session_end_reason

ends up with only IP addresses and no session_end_reason. Is this possible? Best regards
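A hedged sketch of why this can come back blank and one way around it: by default the subsearch hands back whatever fields its results contain (including url), so the outer search is constrained by more than just src_ip, and traffic-log events never carry a url to match. Restricting the subsearch output to src_ip usually helps. The sourcetype values pan:threat and pan:traffic are assumptions based on typical Palo Alto data; adjust to the real ones.

index="paloalto" sourcetype="pan:traffic" session_end_reason=*
    [search index="paloalto" sourcetype="pan:threat" url="*sonos.com*"
     | stats count by src_ip
     | fields src_ip]
| table src_ip session_end_reason

Note that url will always be empty when tabled against traffic events, since that field only exists in the threat log.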
Hi, I want to find specific strings in all events in order to classify them into two values: if an event contains "A" or "B" or "C", put it under the value "OK"; if not, give it the default value "KO". I first tried this query:

| eval field=if(searchmatch("A"),"OK","KO")

Then I tried ("A" OR "B") and it does not work. I don't know how to add the other strings to this command. Can you help me?
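A hedged sketch: searchmatch() takes a whole search string, so the OR goes inside the quotes rather than around separate searchmatch() calls (the literals A, B and C are placeholders from the question):

... | eval field=if(searchmatch("A OR B OR C"), "OK", "KO")

An equivalent using case() and match() against the raw event text, if a regex alternation is preferred:

... | eval field=case(match(_raw, "A|B|C"), "OK", true(), "KO")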