All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I use Splunk to collect AWS WAF logs and use the search below to get the top 50 client IPs by HTTP request count. Now I want to know the allow percentage and block percentage for each of those top 50 IPs, and to have those percentages shown in the chart below. How can I modify my search command?

index="aws_waf" action=block OR action=allow | spath webaclId | top limit=50 "httpRequest.clientIp"
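A possible approach (an untested sketch, assuming the action field only carries the values allow and block) is to count both actions per client IP, derive percentages with eval, and keep the 50 busiest IPs:

index="aws_waf" action=block OR action=allow
| spath webaclId
| stats count AS total, count(eval(action="allow")) AS allow_count, count(eval(action="block")) AS block_count BY "httpRequest.clientIp"
| eval allow_percent = round(allow_count / total * 100, 2), block_percent = round(block_count / total * 100, 2)
| sort - total
| head 50

The resulting allow_percent and block_percent columns can then be charted alongside the request count.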
I want to add two text fields to a Splunk XML dashboard, i.e. "IST" and "PST". These text fields should contain the current IST date/time and the current PST date/time, respectively. The dashboard should look like the mockup below.
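A sketch of a panel search that could back those two fields (it assumes the search is rendered under a UTC user timezone, uses a fixed IST offset of UTC+5:30 and a fixed PST offset of UTC-8, and ignores daylight saving):

| makeresults
| eval IST = strftime(now() + 19800, "%d-%m-%Y %H:%M:%S"), PST = strftime(now() - 28800, "%d-%m-%Y %H:%M:%S")
| table IST, PST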
Hi, I am struggling with field extractions. I have two fields that I want to extract, but the problem is that the value sometimes appears as 'Documentid : 123456789' and sometimes as 'DocumentId 123456789', i.e. without the colon. Is it possible to make an extraction that extracts only the numbers after 'DocumentId'?
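A case-insensitive regex with an optional colon should cover both variants; a sketch using rex (the base search is a placeholder):

index=your_index
| rex field=_raw "(?i)DocumentId\s*:?\s*(?<DocumentId>\d+)"

The same pattern could also be used in a props.conf EXTRACT setting if a permanent search-time extraction is preferred.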
Hello All, I have the query index=xxxx sourcetype=xxx_* NOT(ASA), which filters the logs that are not ASA from 4 sourcetypes. I want to send these resulting logs to a new sourcetype called xxx_analmoly. Is it possible, and if yes, how can I achieve this?
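If the intent is to rewrite the sourcetype at index time for newly arriving data (already-indexed events cannot be changed), the usual approach is a props.conf/transforms.conf pair on the parsing tier (heavy forwarder or indexer). A hedged sketch; the stanza name and regex are placeholders to repeat/adapt for each of the 4 sourcetypes:

# props.conf
[xxx_sourcetype1]
TRANSFORMS-route_non_asa = set_xxx_analmoly

# transforms.conf
[set_xxx_analmoly]
REGEX = ^(?!.*ASA).*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::xxx_analmoly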
I need help with Splunk SPL or the REST API to fetch a report showing the total count of servers (Splunk universal forwarders) reporting to the indexers and heavy forwarders, with a breakdown. Some UFs are sending data to the indexers directly and a few of them are sending it via a HF (due to some connection issues we have followed this architecture). Please assist me on the same.
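One possible starting point (a sketch, not verified against your environment) is the tcpin_connections metrics that every receiving instance writes to _internal; there, host is the receiver (indexer or HF) and hostname is the sending forwarder:

index=_internal sourcetype=splunkd group=tcpin_connections fwdType=uf
| stats dc(hostname) AS forwarder_count, values(hostname) AS forwarders BY host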
Hi, I want to change the default values of the specific TTL settings for each action that can be triggered from an alert. Specifically, I'm asking how to change the default value of the "action.lookup.ttl" parameter. I know how to set it for each saved search individually, but not how to change the default value.
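My understanding (worth verifying against the alert_actions.conf spec for your Splunk version) is that the per-action default lives in the action's stanza in alert_actions.conf, so a global override might look like this:

# $SPLUNK_HOME/etc/system/local/alert_actions.conf (or the local/ directory of the relevant app)
[lookup]
ttl = 120

Saved searches that set action.lookup.ttl explicitly would still take precedence over this default.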
Hi, I've indexed a 12 MB file in Splunk, but there is a difference between the line count of the file and the event count in Splunk: file = 114,475 lines, Splunk = 104,475 events. The file lines look like this:

123456789|0123456789|0123456789|Tobe                             |Alex                            |

Any idea? Thanks
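A quick way to check whether some file lines are being merged into multi-line events (a sketch; the index and source are placeholders):

index=your_index source="*your_file*"
| eval line_count = mvcount(split(_raw, "\n"))
| stats count AS events, sum(line_count) AS total_lines

If total_lines comes out near 114,475 while events is 104,475, line merging is the likely cause, and setting SHOULD_LINEMERGE = false (or an explicit LINE_BREAKER) in props.conf for this sourcetype would be worth testing.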
Hello, We have several alerts whose corresponding jobs occasionally go into status "waiting" and stay like that. The next executions of these alerts are then not triggered, of course, so we get quite a few skipped jobs. The jobs overview states the jobs are in status "Parsing"; however, when I copy the corresponding search and execute it in another search window, it finishes quite fast. Please see also the screenshot below. It seems to get stuck at the following point (last entries in the search.log):

.... 12-05-2022 06:40:02.915 INFO ChunkedExternProcessor [15318 searchOrchestrator] - Running process: /vol1/opt/splunkdev2/splunk/bin/python3.7 /vol1/opt/splunkdev2/splunk/etc/apps/splunk_app_db_conn

I increased all possible limits and quotas I could come up with to lift any restrictions on concurrency, but it did not help. How would I investigate this further?
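Since the last search.log entry is the DB Connect external process being launched, the scheduler and DB Connect internal logs are a reasonable next place to look; a sketch using the standard internal index (the dbx* sourcetype prefix for DB Connect logs is an assumption that may differ in your version):

index=_internal sourcetype=scheduler status=skipped
| stats count BY savedsearch_name, reason

index=_internal sourcetype=dbx* log_level=ERROR
| stats count BY sourcetype, source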
I have a query that returns an average calculation over time, and I am using a sparkline to show the results for each period over that time. However, although my results show a correct value, my sparkline only shows a value of 0 or 1. My search is:

| tstats SUM(ABC) as ABC, sum(DEF) as DEF where index=FOO earliest=-4h latest=-45m by _time platform span=5m | eval AVG_ABC=((sum(DEF)/sum(ABC))/60) | stats sparkline avg(AVG_ABC) by platform

Instead of the single-line result with the sparkline over time, I get the following: Can anyone point me in the right direction? Essentially I am looking to create something like a single-value viz with a trendline. Thanks.
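One thing that stands out (a sketch, untested): sum() is not an eval function, so after the tstats the renamed fields ABC and DEF should be used directly, and the sparkline can be given an explicit span matching the 5-minute buckets:

| tstats sum(ABC) AS ABC, sum(DEF) AS DEF where index=FOO earliest=-4h latest=-45m by _time, platform span=5m
| eval AVG_ABC = (DEF / ABC) / 60
| stats sparkline(avg(AVG_ABC), 5m) AS trend, avg(AVG_ABC) AS avg_value BY platform

For a single-value visualization with a trendline, a timechart of AVG_ABC fed into the Single Value viz with the trend indicator enabled is another option.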
Hi all, I recently upgraded all Splunk deployment tiers (search head, indexer and heavy forwarder), and we collect Windows events with the Splunk_TA_windows add-on. Before the upgrade, Windows event fields like EventCode appeared, but after the upgrade only the general fields are visible. The Splunk_TA_windows add-on is installed on all Splunk components (HF, SH and indexer). Although the fields do not appear, I can still use the missing fields like EventCode in search queries and in commands like top and stats. How can I troubleshoot and resolve this problem? What's wrong? Can anybody help me?
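Since EventCode still works in stats and top, the extraction itself seems intact, which often points to a display or search-mode issue (e.g. Fast mode, or the field not being selected in the fields sidebar) rather than a broken add-on. A quick check (a sketch; the index and sourcetype are placeholders) of how many events actually carry the field:

index=wineventlog sourcetype=WinEventLog*
| stats count AS total_events, count(EventCode) AS events_with_eventcode BY sourcetype, host

Running this in Verbose search mode and comparing the two counts can help separate an extraction problem from a UI/display problem.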
Hello there, I have been trying to secure Splunk Web using TLS certificates. I followed this link: Configure Splunk Web to use TLS certificates - Splunk Documentation. Things to know: I sent a signing request to a CA. My server certificate file contains only the server certificate and the CA certificate (in this order). My web.conf is the following:

[settings]
enableSplunkWebSSL = true
privKeyPath = ..\mycerts\myServerPrivateKey.key
serverCert = ..\mycerts\splunk-web.pem
sslPassword =
startwebserver = true

As a result I cannot connect to 127.0.0.1:8000 ("This page isn't working right now"), and when I restart Splunk I get the message "web interface does not seem to be available"; it also takes about 50 minutes for Splunk to restart. I suspect the problem is that I am not including a CA or .csr file, but I am not sure, since that is not indicated in the documentation; I also tried adding the private key and the .csr file but still had the same error. Can you help me figure out what I am doing wrong? Any help would be appreciated. Have a great day!
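For comparison, a minimal sketch of how I read the documented settings (the paths here are examples relative to $SPLUNK_HOME; if the private key has no passphrase, the empty sslPassword line is probably better removed, and the serverCert file should contain the server certificate followed by the CA chain, which you already have):

[settings]
enableSplunkWebSSL = true
privKeyPath = etc\auth\mycerts\myServerPrivateKey.key
serverCert = etc\auth\mycerts\splunk-web.pem
startwebserver = true

If Splunk Web still fails to start, web_service.log and splunkd.log under $SPLUNK_HOME\var\log\splunk usually show whether the key and certificate could be loaded.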
Hi All, Our Windows servers have the Windows Machine Agent installed: machine agent version 22.9, on Microsoft Windows Server 2016. After installation, we noted the following points. 1) The WmiPrvSE process consumes more than 50% of the CPU. 2) The agent is automatically restarting. 3) The MA is steadily using more memory. Please provide input to help resolve these problems.
Hello! Currently I'm trying to optimize Splunk searches left by another colleague, which are usually slow or very big. My first thought was to change the "basic searches" (searches that don't use tstats) into tstats searches to get the most notable acceleration. The needed data models are already accelerated and the fields are normalized. Below is one of those searches I would like to change into tstats.

index=* message_type=query NOT dns_request_queried_domain IN (>different_domainnames>) | lookup1 ip_address as dns_request_client_ip output ip_address as dns_server_ip | search dns_server_ip=no_matches | lookup2 domain as dns_request_queried_domain output domain as cmdb_legit_domain | search cmdb_legit_domain=no_matches | lookup3 domain as dns_request_queried_domain output domain as wl_domain | search wl_domain=no_matches | eval list="custom" | `ut_parse_extended(dns_request_queried_domain,list)` | search NOT ut_domain="None" | lookup4 domain as ut_domain output domain as umbrella_domain | lookup5 domain as ut_domain output domain as majestic_domain | search umbrella_domain=no_matches AND majestic_domain=no_matches | bucket _time span=5s | stats count by _time, ut_domain, dns_request_client_ip | search count>100 | sort -count

Now I struggle to "get" how to connect the way tstats works with the way the basic search works. As far as I've read and seen, tstats only works with indexed fields (or accelerated data model fields), not fields that are extracted at search time. So I guess my question is: how could I use tstats and still incorporate the above fields and lookups into an optimized search? I really struggle to understand how to incorporate tstats in this case. Thanks so much for every hint or help. André
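A possible shape for the tstats part (a sketch; it assumes the DNS events are mapped to an accelerated Network_Resolution-style data model, and the data model, field names and example domains below are assumptions to adapt): the idea is to let tstats do only the initial scan and 5-second aggregation on data model fields, and keep the lookups and ut_parse_extended as a second stage over the much smaller aggregated result set, since tstats itself cannot filter on search-time extractions or lookup output.

| tstats summariesonly=true count
    from datamodel=Network_Resolution
    where NOT DNS.query IN ("example.com", "example.net")
    by _time span=5s, DNS.query, DNS.src
| rename DNS.query AS dns_request_queried_domain, DNS.src AS dns_request_client_ip
| eval list="custom"
| `ut_parse_extended(dns_request_queried_domain,list)`
| search NOT ut_domain="None"
| stats sum(count) AS count BY _time, ut_domain, dns_request_client_ip
| search count>100
| sort - count

The existing lookup1-lookup5 filters would slot in between the rename and the final stats, much as in the original search.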
Hello Experts, In my client environment we have a set of AWS EC2 instances with the Splunk agent installed, sending logs to the deployment server. But recently I'm facing an issue where a few newly built UNIX AWS EC2 instances are not sending logs to the deployment server (via the Unix TA), although they are reporting to the deployment server's forwarder management. On further troubleshooting I found that the Unix AWS EC2 instances' local system time is UTC while my deployment server is on MYT. Will that cause the issue and stop log onboarding? If I change/add settings in the particular EC2 instances' Splunk_UNIX_TA props.conf (in either the local or default stanza), will that resolve the issue? (We have the option to change those machines' local time settings, but if the client does not accept changing the time settings, what is next?) Any advice? Thanks in advance.
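If the root cause is timestamps being interpreted in the wrong timezone (which usually makes recent events look "missing" rather than stopping ingestion altogether), a TZ override in the TA's props.conf on the affected forwarders is the usual fix; a sketch, with the sourcetype name as a placeholder:

# local/props.conf of the Unix TA on the affected forwarders
[your_unix_sourcetype]
TZ = UTC

Before changing anything, checking whether the events actually arrive but land outside the searched time range (e.g. by searching All time for one affected host) would confirm or rule out this theory.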
Hi Splunkers, I've defined a new role and checked all capabilities for it, but gave it access to just one specific index. When I search that index, it doesn't show any results for me, while with another user and another role I can search that index. Something weird: when I change the user's role to, for example, "user", the search results are shown. Is there a limit on the number of roles that can be defined in Splunk? How can I troubleshoot these kinds of permission issues in the Splunk logs?
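One quick check (a sketch using the standard authorization REST endpoint) is to compare what each role actually allows and searches by default, since an empty default index list or a restrictive search filter produces exactly this "no results" behaviour:

| rest /services/authorization/roles
| table title, srchIndexesAllowed, srchIndexesDefault, srchFilter, imported_roles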
Hi, I am looking for an alternative to the WHOIS app (which executes a whois lookup on a given domain / given IP) from Splunkbase. Do we have anything other than this app? It's not compatible with my Splunk Cloud. Thanks.
I would like to inquire whether there is a way to transform our HTML data into tabular data in Splunk once it is indexed. I am using the Jira/Confluence get-content API, which retrieves HTML data (a Confluence page). We would like to index this data in Splunk using an add-on input (configured as Python code in Splunk Add-on Builder) that uses the BeautifulSoup library for parsing. I also believe transforms.conf and props.conf will help to format the indexed data. However, this setup seems difficult, because extensive formatting is needed (not all pages are the same) before submitting the data to Splunk (e.g. to create a Splunk dashboard that shows each Confluence page's information, such as table data, header data, etc.).
I updated Splunk from 9.0.0 to 9.0.2, and in one of my panels I changed the lookup from a KV store lookup to a plain CSV lookup, from "allfindings" to "allfindings.csv". Right after this I started getting the following error. I tried to inspect the error and found this in my console. I am trying to resolve this issue but nothing is working.
Hello Splunk Users, The Splunk Add-On for Amazon Security Lake is a brand new integration with the Amazon Security Lake preview. If you have tried out this new integration, we would love to hear your questions and feedback. Did you have any challenges setting up the integration? Is the functionality it provides useful enough for your team to adopt? Why or why not? Are there any capabilities you would like added to the integration? Etc. In addition to providing feedback here, we have a survey if you have time. https://forms.gle/vpkFrPMpXx23pnae8 https://classic.splunkbase.splunk.com/app/6684/ Thank you, Splunk GDI Team
Hello All, A dashboard in Simple XML has a wonderful option to show or hide a panel using a token and a "depends" setting for a given panel. It works, and I love it. BUT... how can I show or hide a table in Splunk Dashboard Studio? I have seen nothing on this. I do notice that a table has options to "move to front" or "send to back." Is there a way to do this in the JSON code with a token? I want to have two tables: one runs a real-time search, the other uses the time picker with the global_time token. I want to hide the real-time table when the user clicks the time picker. Can this be done using Dashboard Studio? Thanks, eholz1