
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, has anyone here tried to turn off the Export PDF option for certain dashboards, or for all dashboards?
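One possible approach, assuming the dashboards are Simple XML: recent Splunk versions support chrome-hiding URL parameters, and hideExport (treat the exact parameter name as an assumption to verify against your version's docs) suppresses the export controls when the view is loaded, e.g.:

    https://splunk.example.com/en-US/app/search/my_dashboard?hideExport=true

This only affects links built with that parameter; I am not aware of a single per-dashboard toggle in the UI itself.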
I want to be able to drill down on a field if the value is an IP address. If it is not an IP address it will be some string value like "N/A" or similar, and the field should not be clickable, or there should be some other way to handle the drilldown for fields with that value.
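A minimal SPL sketch for flagging which rows hold a real IPv4 value (the field name dest is an assumption):

    ... | eval is_ip=if(match(dest, "^\d{1,3}(\.\d{1,3}){3}$"), 1, 0)

The is_ip flag, or a dashboard token set from it, can then drive a conditional drilldown so that clicks on "N/A" rows do nothing.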
I have an all_ip field that contains all my IPs. Now I want to split it into public and private IPs: public_ip, private_ip, all_ip. When private_ip is null I want to put the value from all_ip into the public_ip field. First I did: | eval private_ip=if(like(all_ip,"XXXX.%") OR like(all_ip,"XXX.%"),all_ip,null()) and now I need to put all the rest into the public_ip field. Is this possible? Thanks!
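A follow-on eval along these lines should fill the gap (a sketch against the fields named above):

    | eval public_ip=if(isnull(private_ip), all_ip, null())

That is, any row the private_ip eval left null falls through into public_ip.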
Hi everyone, I am searching for a way to list every alert (that sends email) along with: schedule (cron), last run, send email (sent or not). So far I can find this list of info but still have not managed to get the last run and send-email status:

    | rest /servicesNS/-/App_name/saved/searches
    | fields title disabled actions alert.severity cron_schedule action.email.to action.email.bcc is_scheduled max_concurrent next_scheduled_time run_n_times
    | where disabled=0
    | where actions="email"
    | table title cron_schedule action.email.to action.email.bcc is_scheduled max_concurrent next_scheduled_time run_n_times

Does anyone have an idea, please? Thanks in advance!
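One hedged way to get the last run time is from the scheduler logs rather than REST (a sketch; savedsearch_name holds the alert title):

    index=_internal sourcetype=scheduler status=success
    | stats latest(_time) as last_run by savedsearch_name
    | convert ctime(last_run)

Joining that to the REST output on title should supply last run. Email delivery attempts from the sendemail action are typically written to python.log in _internal, which can be checked the same way.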
Please help me show the timings on the bar chart. I am using chart count over ... by description to view the file name on the graph when I point the mouse at the chart, but I am not able to get the timings on the x-axis. Below is the query (graph screenshot omitted):

    index=xxxxx sourcetype=xxxx source="xxxxxx_*.log"
    | eval description=case(like(Suspend,"S"),"Suspended", like(Suspend,"P"),"Partially-Completed", like(Suspend,"C"),"Completed")
    | eval File_Name=description."-".TC_File_Name
    | table _time File_Name TC_File_Name description
    | chart count(File_Name) over TC_File_Name by description
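If the goal is time on the x-axis, timechart is a more direct fit than charting over TC_File_Name (a sketch reusing the fields above; the span is an assumption):

    index=xxxxx sourcetype=xxxx source="xxxxxx_*.log"
    | eval description=case(like(Suspend,"S"),"Suspended", like(Suspend,"P"),"Partially-Completed", like(Suspend,"C"),"Completed")
    | timechart span=1h count by description

chart count over <field> buckets by that field's values, so _time never reaches the axis; timechart buckets by _time instead.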
I am using Splunk 8.0.8. I have Python versions 2.7 and 3.7 installed in the $SPLUNK_HOME/bin folder, but all my Python scripts are being executed with Python 2.7. I tried setting python.version=python3 in server.conf under ./etc/system/local, but the scripts still run with Python 2.7. I also tried python.version=forced_python3 in server.conf, with no luck. Can someone please let me know where I need to change the Python version so that all my scripts start using Python 3.7?
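For reference, a minimal server.conf sketch of where the setting is expected to live (the stanza matters, and note the documented force value is force_python3, not forced_python3):

    [general]
    python.version = python3

Scripts launched outside splunkd, or inputs whose own stanza pins python.version, would not be affected by this global setting.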
I see that in our environment some of our Search Heads are set up as forwarders and some are not. I think this environment, like most, grew from one server to a multi-server environment, all before my time. Now we have Search Heads and dedicated Deployment servers aka Forwarders, which leads me to believe we no longer need the Search Heads to forward anything. So is there a way I can see what is using the Search Heads as forwarders?
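Two hedged ways to check what a Search Head is actually forwarding: inspect its effective outputs config, and look at its outbound tcp metrics (both standard tooling):

    $SPLUNK_HOME/bin/splunk btool outputs list --debug

    index=_internal host=<search_head> source=*metrics.log* group=tcpout_connections
    | stats count by name

The btool output shows which app supplies each outputs.conf stanza; the metrics search shows whether data is actively leaving the box and to where.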
I have a search: (index=.... sourcetype=.... | stats count(transaction) as "Transaction"). However, when I use this search for ITSI, my KPI result is not what I expect (screenshot of the KPI result omitted). Does anyone know why, and how to fix this? Thank you for your help.
11-12-2023 21:20:03.288 +0900 ERROR CacheManager [3953110 TcpChannelThread] - Failed to check receipt for cache_id="dma|ioapratraffic~434~9DF98E46-8A38-48F5-9EFB-90D0467F1463|89513704-8894-4CFC-AC58-9BF7D36B3B59_DM_Splunk_SA_CIM_Compute_Inventory" err=Service Unavailable. Before upgrading, this error was frequently output from the indexer; after upgrading it is still output, but the number of errors has drastically decreased. Version upgraded from 8.2.1 to 8.2.7.
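A quick sketch for trending how often the error still appears, using the standard internal logs:

    index=_internal sourcetype=splunkd ERROR CacheManager "Failed to check receipt"
    | timechart span=1h count

Comparing the counts before and after the upgrade window quantifies the improvement.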
After a Splunk version upgrade (some time ago, I'm sure) there is a new directory on the Index Cluster Master called manager-apps, but the old one called master-apps is still there as well. I know why Splunk did this; the question is how things are handled moving forward. All of my old apps are still under master-apps. Does this mean they will stay there, and that if I create any new cluster apps I need to create them under manager-apps now? In other words, it appears Splunk did not just rename the old directory or move the apps to the new one automagically?
I am trying to get percentage value fields for multiple fields by time, and the fields are dynamic. How can I calculate this?

    search
    | eval Duration=tostring(round(TimeDiff1), "duration")
    | chart count over TimeDiff1 by MaterialNumber
    | chart sum(*) as * by TimeDiff1 span=300

My result is:

    TimeDiff1   KM50115007V002  KM51585489V000  KM51585490V000  KM51585494V000
    0-300       24              0               2               0
    300-600     0               1               0               0
    600-900     0               7               0               1
    900-1200    0               0               0               0
    1200-1500   0               0               0               4
    1500-1800   0               0               0               0
    1800-2100   0               0               0               0
    2100-2400   0               0               0               1

But I want the result in the format below:

    TimeDiff1   KM50115007V002  KM51585489V000  KM51585490V000  KM51585494V000  perc(KM50115007V002)  perc(KM51585489V000)  perc(KM51585490V000)  perc(KM51585494V000)
    0-300       24              0               2               0               100                   0                     100                   0
    300-600     0               1               0               0               0                     12.5                  0                     0
    600-900     0               7               0               1               0                     87.5                  0                     16.66666667
    900-1200    0               0               0               0               0                     0                     0                     0
    1200-1500   0               0               0               4               0                     0                     0                     66.66666667
    1500-1800   0               0               0               0               0                     0                     0                     0
    1800-2100   0               0               0               0               0                     0                     0                     0
    2100-2400   0               0               0               1               0                     0                     0                     16.66666667
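Since the column names are dynamic, foreach with a wildcard can compute per-column percentages without naming the fields (a sketch; the KM* prefix is an assumption about the field names):

    ... | eventstats sum(KM*) as t_KM*
    | foreach KM* [ eval "perc(<<FIELD>>)" = round('<<FIELD>>' / 't_<<FIELD>>' * 100, 2) ]
    | fields - t_KM*

eventstats adds a column total per material number; foreach then divides each cell by its column's total.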
I have three Splunk 9.0.0 on Windows Server 2019 environments, completely isolated, and I like to run the web interface even on my Indexers. I have it running just fine on 6 clustered Indexers in Production, where it runs as https with our own certificates. I also have it running on http in my home lab, also on 3 Indexers in a cluster. But when I try to run it in our lab on 3 clustered Indexers, for the life of me it won't start. I have restarted Splunk 900 times and even tried splunk start splunkweb, and it never does these checks: Waiting for web server at https://127.0.0.1:8000 to be available. Nor does it give me the message that the web is running: The Splunk web interface is at http://MyIndexServer01:8000
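A hedged first check is whether the web server is even enabled in the effective config, and what the startup logs say:

    $SPLUNK_HOME/bin/splunk btool web list settings --debug | findstr startwebserver

startwebserver = 1 under [settings] in web.conf is required; if it is already set, $SPLUNK_HOME/var/log/splunk/web_service.log and splunkd.log usually show why the bind failed (for example, a port conflict on 8000 or a certificate path error).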
My requirement is to fire an action from AppDynamics, using Ansible, to restart the AppServer. How can we integrate AppDynamics with Ansible, and what configuration is required on the Ansible side? Please share.
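One commonly used pattern (a sketch, not the only option): have an AppDynamics HTTP request action template call the Ansible Tower / AWX API to launch a job template that wraps the restart playbook. The host and template id below are placeholders:

    curl -k -u tower_user:tower_pass -X POST \
      https://tower.example.com/api/v2/job_templates/42/launch/

On the Ansible side this means an AWX/Tower (or equivalent API-reachable) install, a job template for the restart playbook, and credentials for the target app servers.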
Hello, I was using a search and getting the error message stated in the subject. I have tried moving the tstats around and editing some of the commands, but I either run into the same error or a tsidx error. Here is the search:

    index=netsec_index sourcetype=pan* OR sourcetype=fgt* user=saic-corp\\heathl misc=* OR url=* earliest=-4d
    | eval Domain=coalesce(misc, url)
    | eval domain=misc + "," + url
    | makemv delim="," domain
    | fields _time action category rule session_end_reason http_category vendor_action url misc domain Domain
    | table _time action category rule session_end_reason http_category vendor_action url misc domain Domain
    | stats count by domain
    `comment("Search for High Volume of Packets in/out (Show Megabytes/Gigabytes) back by earliest=-1d. Exclude app=ipsec.")`
    | tstats summariesonly=true count from datamodel=Network_Traffic where All_Traffic.action=allowed AND NOT All_Traffic.app=ipsec-esp-udp earliest=-1d by All_Traffic.src_ip All_Traffic.dest_ip All_Traffic.app All_Traffic.packets_in All_Traffic.packets_out All_Traffic.bytes All_Traffic.bytes_in All_Traffic.bytes_out All_Traffic.action All_Traffic.rule All_Traffic.user
    | rename All_Traffic.* as *
    | sort - bytes_out
    | eval Megabytes_out=round(bytes_out/1024/1024,2) `comment("Math for bytes > Megabytes")`
    | eval Megabytes_in=round(bytes_in/1024/1024,2) `comment("Math for bytes > Megabytes")`
    | eval Gigabytes_out=round(bytes_out/1024/1024/1024,2) `comment("Math for bytes > Gigabytes")`
    | eval Gigabytes_in=round(bytes_in/1024/1024/1024,2) `comment("Math for bytes > Gigabytes")`
    | eval packets_in=tostring(packets_in, "commas")
    | eval packets_out=tostring(packets_out, "commas")
    | eval bytes=tostring(bytes, "commas")
    | eval bytes_in=tostring(bytes_in, "commas")
    | eval bytes_out=tostring(bytes_out, "commas")
    | fields - count
    | head 100

If any guidance can be provided I would appreciate it. Thank you.
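For what it's worth, tstats is a generating command and normally has to be the first command in a search; placed mid-pipeline it errors. A hedged restructuring is to run it in an append subsearch if the two result sets really must share one job (a trimmed sketch):

    index=netsec_index ... | stats count by domain
    | append
        [| tstats summariesonly=true count from datamodel=Network_Traffic
             where All_Traffic.action=allowed AND NOT All_Traffic.app=ipsec-esp-udp earliest=-1d
             by All_Traffic.src_ip All_Traffic.dest_ip
         | rename All_Traffic.* as * ]

Otherwise, splitting it into two separate searches or dashboard panels avoids the problem entirely.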
Hi, could you please let me know how to change the structure of the table into the above format? (table screenshots omitted)
I'm trying to blacklist event code 4634 when user_type = computer. I'm using the blacklist below in my inputs.conf file and it doesn't seem to work. When I remove user_type="computer" it does properly filter out event code 4634, but it doesn't work with the combination of the two. What am I doing wrong, or is there a different way to accomplish this?

    blacklist4 = EventCode="4634" user_type="computer"
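A likely cause, hedged: user_type is a search-time field, not a key present in the raw Windows event, and event-log blacklists can only match fields that exist in the event itself (EventCode, Message, User, and so on). A common workaround is to match the Message text instead, relying on computer account names ending in $ (a sketch; verify the Account Name layout in your 4634 events):

    blacklist4 = EventCode="4634" Message="(?s).*Account Name:\s+[^\r\n]+\$"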
Hi guys, can you please help me? I am trying to create a query so that if a user is in a different location on the same day, it will only keep one of them. Please see below:

    | convert timeformat="%F %H:%M" ctime(zone) as ctime
    | stats count by user fullname country ctime location
    | rename fullname as "Name", ctime as DateStamp, location as "Location", user as "NetworkID", country as "Country"
    | fields - count
    | sort 0 NetworkID

This is what I am getting with the query above:

    NetworkID  Name      Country  DateStamp   Location
    userA      A Sample  Spain    12-26-2022  Office
    userA      A Sample  Spain    12-27-2022  Office
    userA      A Sample  Spain    12-27-2022  Home

And this is what I am trying to get: if it's the same day, it should only select the Office row:

    NetworkID  Name      Country  DateStamp   Location
    userA      A Sample  Spain    12-26-2022  Office
    userA      A Sample  Spain    12-27-2022  Office

Thank you in advance
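A sketch of one way to do it: rank Office ahead of other locations, then dedup per user per day (field names follow the query above; the day extraction assumes DateStamp begins with the date):

    ...
    | eval day=substr(DateStamp, 1, 10)
    | eval loc_rank=if(Location="Office", 0, 1)
    | sort 0 NetworkID day loc_rank
    | dedup NetworkID day
    | fields - day loc_rank

dedup keeps the first row it sees per NetworkID/day, and the sort guarantees that row is the Office one whenever both locations exist.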
Recently I upgraded Splunk Enterprise to version 9.0.2. After a few days, the index queue fill ratio is 100% and the indexing rate is 0. I increased the max queue size to 100MB, but there is still a bottleneck on the index queue. I think the indexers are too slow writing data to disk. How do I change the indexer speed or size for writing to disk? Please help.
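Before tuning, it may help to confirm which queue is actually the choke point, since blockage cascades backwards from the slowest stage. A standard metrics.log sketch:

    index=_internal source=*metrics.log* group=queue name=indexqueue
    | timechart span=5m avg(current_size_kb) as avg_kb max(max_size_kb) as max_kb

If indexqueue itself is saturated, disk I/O on the hot/warm volume is the usual suspect; raising queue sizes only adds buffer, it does not increase write throughput.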
Is there any way to join 2 metrics in the plot editor and create a chart or table? E.g. Plot A has "otelcol_process_cpu_seconds" and Plot B has "cpu.utilization". I need to create Plot C listing data that exists in B (cpu.utilization) but does not exist in Plot A (otelcol_process_cpu_seconds). I tried with a formula, but formulas only work for math operations like B-A (single value), not for listing the data. I need a plot table that shows host names not in "otelcol_process_cpu_seconds".
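A hedged SignalFlow sketch of the presence trick (dividing a series by itself yields 1 wherever it reports and nothing where it doesn't), which can then be rendered as a table grouped by host:

    A = data('otelcol_process_cpu_seconds').sum(by=['host.name'])
    B = data('cpu.utilization').sum(by=['host.name'])
    ((B/B).fill(0) - (A/A).fill(0)).above(0, inclusive=False).publish('in_B_not_in_A')

The dimension name host.name is an assumption; the comparison only lines up where both metrics carry the same host dimension.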
Hi all, I'm trying to install an app (tar.gz file) available on GitLab. I am using the 'apps/local' endpoint to install it.

    curl -k -u user:pass -X POST https://localhost:8089/services/apps/local -d path=https://gitlab.com/xxxxx/yyyyy/internal_app-1.0.0.tar.gz -d update=1

But it gives an error like the one below.

    splunklib.binding.HTTPError: HTTP 500 Internal Server Error -- Unexpected error downloading update: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.

Does anyone have an idea how I can solve it? By the way, I do not want to disable SSL/TLS verification if possible.
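The error suggests splunkd is downloading the URL itself and failing to verify GitLab's certificate chain against the CA bundle splunkd trusts. Two hedged ways to keep verification on: add the GitLab CA chain to the CA file splunkd uses (sslRootCAPath in server.conf [sslConfig], depending on version), or sidestep the server-side download by fetching the tarball first and installing from a local path (the /tmp path is a placeholder):

    curl -LO https://gitlab.com/xxxxx/yyyyy/internal_app-1.0.0.tar.gz
    curl -k -u user:pass -X POST https://localhost:8089/services/apps/local \
      -d name=/tmp/internal_app-1.0.0.tar.gz -d filename=true -d update=1

The name/filename parameters point apps/local at a file on the Splunk server's own disk; the path must be readable by splunkd.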