All Topics


Hi all, I recently upgraded all Splunk deployment tiers (search head, indexer, and heavy forwarder). We collect Windows events with the Splunk_TA_windows add-on. Before the upgrade, Windows event fields like EventCode appeared, but after the upgrade only the general fields are visible. The Splunk_TA_windows add-on is installed on all Splunk components (HF, SH, and indexer). Although the fields no longer appear, I can still use the missing fields like EventCode in search queries and in commands like top and stats. How can I troubleshoot and resolve the problem? What's wrong? Can anybody help?
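If the fields still work in top and stats but don't show in the sidebar, the extractions themselves are probably intact and the gap is often display-side (Fast search mode hides non-referenced fields) or a TA-visibility issue on the search head. A minimal check, assuming the TA's default XmlWinEventLog sourcetype (adjust to the sourcetype your inputs actually produce):

  # On the search head: list where each Windows extraction setting comes from
  # (XmlWinEventLog is an assumed sourcetype name)
  $SPLUNK_HOME/bin/splunk btool props list XmlWinEventLog --debug

If Splunk_TA_windows does not appear in the btool output on the search head, the add-on is not visible in that app/user context there, which would explain fields vanishing from the sidebar while remaining searchable.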
Hello there, I have been trying to secure Splunk Web using TLS certificates, following "Configure Splunk Web to use TLS certificates" in the Splunk documentation. Things to know: I sent a signing request to a CA, and my server certificate file contains only the server certificate and the CA certificate (in this order). My web.conf is the following:

[settings]
enableSplunkWebSSL = true
privKeyPath = ..\mycerts\myServerPrivateKey.key
serverCert = ..\mycerts\splunk-web.pem
sslPassword =
startwebserver = true

As a result I cannot connect to 127.0.0.1:8000 ("This page isn't working right now"), and when I restart Splunk I get the message "web interface does not seem to be available"; it also takes around 50 minutes for Splunk to restart. I suspect it's because I am not including a CA or .csr file, but I am not sure, since that isn't indicated in the documentation. I also tried adding the private key and the .csr file but still got the same error. Can you help me figure out what I am doing wrong, please? Any help would be appreciated. Have a great day!
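Before changing web.conf further, it can help to prove the certificate and key agree with each other; a quick openssl check (the CA bundle filename cacert.pem is an assumption):

  # Verify the chain in the combined PEM against the CA bundle (filename assumed):
  openssl verify -CAfile ..\mycerts\cacert.pem ..\mycerts\splunk-web.pem
  # The certificate and private-key moduli must be identical:
  openssl x509 -noout -modulus -in ..\mycerts\splunk-web.pem
  openssl rsa -noout -modulus -in ..\mycerts\myServerPrivateKey.key

The first command should report OK, and the last two should print the same modulus; a mismatch means the key does not belong to the certificate and splunkweb will fail to start. Also note that relative paths in web.conf are resolved from $SPLUNK_HOME, so ..\mycerts points one level above the Splunk directory, which is worth double-checking.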
Hi all, our Windows servers have the Windows Machine Agent installed (machine agent version 22.9, Microsoft Windows Server 2016). After installation, we noted the following: 1) the WmiPrvSE process consumes more than 50% of the CPU; 2) the agent restarts automatically; 3) the machine agent's memory usage grows steadily. Please provide input to help resolve these problems.
Hello! Currently I'm trying to optimize Splunk searches left by another colleague, which are usually slow or very big. My first thought was to convert the "basic searches" (searches that don't use tstats) into tstats searches, to get the most notable acceleration. The needed data models are already accelerated and the fields are normalized. Below is one of those searches I would like to convert to tstats.

index=* message_type=query NOT dns_request_queried_domain IN (<different_domainnames>)
| lookup1 ip_address as dns_request_client_ip output ip_address as dns_server_ip
| search dns_server_ip=no_matches
| lookup2 domain as dns_request_queried_domain output domain as cmdb_legit_domain
| search cmdb_legit_domain=no_matches
| lookup3 domain as dns_request_queried_domain output domain as wl_domain
| search wl_domain=no_matches
| eval list="custom"
| `ut_parse_extended(dns_request_queried_domain,list)`
| search NOT ut_domain="None"
| lookup4 domain as ut_domain output domain as umbrella_domain
| lookup5 domain as ut_domain output domain as majestic_domain
| search umbrella_domain=no_matches AND majestic_domain=no_matches
| bucket _time span=5s
| stats count by _time, ut_domain, dns_request_client_ip
| search count>100
| sort -count

Now I struggle to "get" how to connect the way tstats works with the way this basic search works. As far as I've read and seen, tstats only works with indexed fields, not with fields that are extracted at search time. So I guess my question is: how could I use tstats and still incorporate the above fields and lookups into an optimized search? I really struggle to understand how to incorporate tstats in this case. Thanks so much for every hint or help. André
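Since the data models are already accelerated, one common pattern is to let tstats do the heavy aggregation against the data model and run all the lookup-based filtering afterwards, on the much smaller aggregated result. A rough sketch, assuming the DNS traffic is mapped to the CIM Network_Resolution data model (the data model, node, and DNS.* field names are assumptions to adapt):

| tstats summariesonly=true count from datamodel=Network_Resolution where nodename=DNS by _time span=5s, DNS.query, DNS.src
| rename DNS.query as dns_request_queried_domain, DNS.src as dns_request_client_ip
``` the lookup1..lookup5 / ut_parse_extended chain from the original search runs here unchanged, now on far fewer rows ```
| stats sum(count) as count by _time, ut_domain, dns_request_client_ip
| search count>100
| sort -count

The trade-off: tstats can only filter and group on data-model (indexed/accelerated) fields, so everything lookup-derived moves after the tstats and operates on pre-aggregated rows instead of raw events; note that the final stats count becomes sum(count) to keep the totals correct.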
Hello experts, in my client environment we have a set of AWS EC2 instances with the Splunk agent installed, sending logs to the deployment server. But recently, a few newly built UNIX AWS EC2 instances are not sending logs (via the Unix TA), although they are reporting to the deployment server's forwarder management. On further troubleshooting, I found that the UNIX AWS EC2 instances' local system time is on UTC while my deployment server is on MYT. Could that cause the issue and stop log onboarding? If I change or add something in that EC2 instance's Splunk_UNIX_TA props.conf (in either the local or default stanza), will that resolve the issue? (We have the option to change the machine's local time settings, but if the client does not accept changing them, what is next?) Any advice? Thanks in advance.
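A clock on UTC normally does not stop ingestion by itself; a timezone mismatch tends to skew event timestamps (events land "in the future" or "in the past" and get hidden by the time range picker) rather than block forwarding, so searching the missing hosts over All time is a quick way to confirm whether events are actually arriving with shifted timestamps. If pinning the timezone per source turns out to be the fix, a minimal props.conf sketch (the host stanza pattern is a hypothetical example; place it in the TA's local directory on the parsing tier):

# props.conf (stanza pattern is hypothetical; match your EC2 hostnames)
[host::ip-10-*]
TZ = UTC

This tells Splunk to interpret those hosts' timestamps as UTC regardless of the indexer's own timezone.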
Hi Splunkers, I've defined a new role and checked all capabilities for it, but with access to just one specific index. When I search that index, it doesn't show any results for me, while with another user and another role I can search it. Something weird: when I change the user's role to, for example, "user", the search results are shown. Is there a limit on the number of roles that can be defined in Splunk? How can I troubleshoot these kinds of permission issues in the Splunk logs?
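Two quick checks that can narrow this down (the role and index names below are placeholders):

``` run this as the affected user: shows the roles Splunk actually applies ```
| rest /services/authentication/current-context splunk_server=local
| table username, roles

and in authorize.conf, make sure the role explicitly allows the index:

# authorize.conf (role and index names are placeholders)
[role_myrole]
srchIndexesAllowed = my_index

It may also be worth checking whether the role inherits from another role with a restrictive srchFilter, since inherited search restrictions can silently filter results to nothing.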
Hi, I am looking for an alternative to the WHOIS app (executes a whois lookup on a given domain/IP) from Splunkbase. Do we have anything other than this app? It's not compatible with my Splunk Cloud. Thanks.
I would like to inquire whether there is a way to transform our HTML data into tabular data in Splunk once indexed. I am using the Jira/Confluence "get content" API, which retrieves HTML data (a Confluence page). We would like to index this data in Splunk using an add-on input (configured as Python code in Splunk Add-on Builder) that uses the BeautifulSoup library for parsing. I also believe transforms.conf and props.conf could help format the indexed data. However, this setup seems difficult, since the pages are not all the same and wide-ranging formatting would be needed before submitting the data to Splunk (e.g., to build a Splunk dashboard showing each Confluence page's information, such as table data, header data, etc.).
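One pattern that avoids per-page props/transforms gymnastics: have the add-on's Python code flatten each parsed table row into a JSON event before indexing, so Splunk's built-in JSON handling does the field work. A sketch of the Splunk side, under the assumption that the collector emits one JSON object per row (the sourcetype name is hypothetical):

# props.conf (sourcetype name is hypothetical)
[confluence:page:json]
KV_MODE = json
TRUNCATE = 0

Each row's columns then arrive as ordinary JSON fields, a dashboard table becomes a plain | table over those field names, and the HTML-structure differences between pages stay contained in the collector code instead of in props/transforms.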
I updated Splunk to 9.0.2 from 9.0.0, and in one of my panels I changed the lookup from a KV store lookup to a plain CSV lookup, i.e. from "allfindings" to "allfindings.csv". Just after this I started getting the following error. I tried to inspect the error and found this in my console. I'm trying to resolve this issue but nothing is working.
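A quick sanity check that often catches this kind of breakage after switching lookup types (the lookup name is the one from the post):

``` confirm the CSV lookup file is readable from the app the dashboard runs in ```
| inputlookup allfindings.csv

If that errors out, check Settings > Lookups > Lookup table files for the file's existence, app context, and sharing permissions; a dashboard in one app cannot see a lookup file that is private or scoped to a different app.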
Hello Splunk Users, Splunk Add-On for Amazon Security Lake is a brand new integration with the Amazon Security Lake preview. If you have tried out this new integration we would love to hear your que... See more...
Hello Splunk Users, Splunk Add-On for Amazon Security Lake is a brand new integration with the Amazon Security Lake preview. If you have tried out this new integration we would love to hear your questions and feedback. Did you have any challenges setting up the integration? Is the functionality it provides useful enough for your team to adopt? Why or why not? Are there any capabilities you would like added to the integration? Etc. In addition to providing feedback here, we have a survey if you have time. https://forms.gle/vpkFrPMpXx23pnae8 https://classic.splunkbase.splunk.com/app/6684/ Thank you, Splunk GDI Team
Hello all, a dashboard in SimpleXML has a wonderful option to show or hide a panel using a token and a "depends" setting on the panel. It works; I love it. BUT... how can I show or hide a table in Splunk Dashboard Studio? I have seen nothing on this. I do notice that a table has options to "move to front" or "send to back". Is there a way to do this in the JSON code with a token? I want to have two tables: one runs a real-time search, the other uses the time picker with the global_time token. I want to hide the real-time table when the user clicks the time picker. Can this be done in Dashboard Studio? Thanks, eholz1
Hi Splunk experts - I have an unusual math problem on my hands and I'm not sure how to deal with it. We are trying to prove how many tickets have been completed, so we only want to count the numbers that show improvement, not the numbers that show the addition of more tickets (following me?). Here's the data:

report_date    total
2022-11-07     4111
2022-11-08     3764
2022-11-09     3562
2022-11-10     3633
2022-11-11     3694
2022-11-14     7506
2022-11-15     12987
2022-11-16     15159
2022-11-17     14851
2022-11-18     14410
2022-11-21     6674
2022-11-22     5793
2022-11-23     5601

What I am trying to do is determine the difference between the "total" values, but only when the count goes down. So, for example, 11/07 - 11/09 shows counts going down (4111 - 3562 = 549). But the numbers go up on 11/10, so we don't want to count those. Then the numbers go down again on 11/17, so I would add the difference between 11/16 and 11/17 to the previous 549. I feel like I am making this more complicated than it needs to be. Help.
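This is the classic "sum only the decreases" pattern, which streamstats handles without manual windowing. A minimal sketch, assuming one event per report_date carrying the report_date and total fields shown above:

``` assumes one row per report_date with fields report_date and total ```
| sort 0 report_date
| streamstats current=f window=1 last(total) as prev_total
| eval completed=if(isnotnull(prev_total) AND total<prev_total, prev_total-total, 0)
| stats sum(completed) as tickets_completed

Each row is compared only with the row before it; increases contribute 0, so the consecutive drops 4111→3764→3562 naturally add up to the 549 in the example, and later drops like 15159→14851 are added on top.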
Hi all, my query:

index=abt_htt_app host=thyfg OR host=jhbjj OR host=nmm sourcetype=app:abt:logs
| stats count as Transactions
| where Transactions>10
| appendcols [ search index=tbt_htt_app host=juhy OR host=kuthf OR host=nmm sourcetype=app:abt:logs | stats count as Success | where Success>5 ]
| appendcols [ search index=ccc_htt_app sourcetype=app:abt:even | stats count as failed | where failed>10 ]
| appendcols [ search index=tbt_htt_app host=juhy OR host=kuthf OR host=nmm sourcetype=app:clt:logs | stats count as error | where error>45 ]

Output:

Transactions   Success   failed   error
12             5         4        10

But when a count condition is not met, those fields don't get displayed, and I get only the Transactions count in the table. Here I want to add customized text like "No action required" under the table, as shown below. How can I do this?

Output:

Transactions
12
"No action required"
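One way to get that trailing message row is to normalize the missing columns with fillnull and then let appendpipe add a literal row only when the other counts are absent. A sketch, under the assumption that "no action required" means Success, failed, and error all failed their thresholds (adjust the condition to your actual rule):

``` the existing search and appendcols from above go here ```
| fillnull value=0 Success failed error
| appendpipe
    [ where Success=0 AND failed=0 AND error=0
      ``` appends one extra row whose Transactions cell carries the message ```
      | eval Transactions="No action required"
      | table Transactions ]

Because the subsearches only return counts above their thresholds, a filled-in 0 reliably marks a column that was dropped, and appendpipe appends the message row only in that case.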
Greetings, everyone. I apologize if this question has been answered before, but I really need a deeper understanding of how to proceed with this. We currently have two Splunk Enterprise indexer clusters. One is our prod infrastructure, spanned across two geo-separated datacenters with 8 nodes total, 4 in each geo-site. We also have nonprod, which is a very similar setup but with only one physical site and 4 nodes making up the cluster. We have recently been asked to assist in migrating these clusters to brand-new physical servers and have questions on the best way to proceed. First, we have local SSD storage arrays on our current physical hosts (hot tier), and our "colddb" is located on a chunk of SAN storage connected by Fibre Channel. This is where the wrinkle is. We are not getting new SAN storage for "colddb", so we will not be able to stand the new servers up, add them to the cluster as 9th nodes, let them replicate, then remove the nodes they replace (getting us back to 8), repeating for all nodes. Instead, we will have to remove the SAN allocation from the old nodes and attach it to the new nodes, making this type of migration impossible. My initial assumption is that we will instead need to decommission a node and replace it with a new node, one at a time, as if a node had failed. Am I correct in this assumption? Is there a better way to handle this, or am I stuck with the current situation? Thanks for your time.
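For the decommission-and-replace path, the peer-removal step is typically driven from the retiring peer so the cluster can re-meet its replication and search factors before the hardware goes away; a rough sketch (the command is the documented peer decommission, but the surrounding sequencing is an assumption to adapt to your change windows):

# On the peer being retired: drain and deregister gracefully.
# --enforce-counts makes the peer wait until every bucket it holds
# has enough copies elsewhere before it leaves the cluster.
splunk offline --enforce-counts

That gives the "as if a node failed, but gracefully" behavior: once the replacement peer has joined and the SAN allocation has been reattached, the same step can be repeated for the next node.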
I have two indexes: IndexA has a `thisId` field; IndexB has fields `otherId` and `name`. I want to write a query which returns a table of all `thisId` values with a matching `name`. The challenge has been writing the query such that it doesn't return all the `otherId` values as well. My current query is:

(index="indexA") OR (index="indexB")
| eval id=coalesce(thisId, otherId)
| stats values(name) as name by id

However, this is returning all the ids in indexB as well. Thank you
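One common fix is to count how many of the two indexes contributed each id and keep only the ids seen in both; a minimal sketch built directly on the query above:

(index="indexA") OR (index="indexB")
| eval id=coalesce(thisId, otherId)
| stats values(name) as name, dc(index) as index_count by id
``` ids present in only one of the two indexes have index_count=1 and are dropped ```
| where index_count=2
| table id, name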
Hi experts, can we monitor Azure applications with AppD? If yes, what options or offerings are available within AppD? Your inputs are greatly appreciated. Thanks in advance. Regards, MSK
Hi, I'm researching the Splunk Enterprise environment and am currently on "Architecture Optimization". I have a quick question for version 9.0.2: what are the recommended ulimit increases on Linux for optimization purposes, and how should they be applied? Regards,
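For reference, the values commonly cited in the Splunk Enterprise docs are 64000 open files and 16000 user processes for the account running splunkd; treat these as a starting point and confirm against the 9.0 documentation for your workload. A sketch for /etc/security/limits.conf, assuming the service account is named splunk:

# open file descriptors (ulimit -n)
splunk  soft  nofile  64000
splunk  hard  nofile  64000
# max user processes (ulimit -u)
splunk  soft  nproc   16000
splunk  hard  nproc   16000

On systemd-managed installs, these are better set via LimitNOFILE and LimitNPROC in the splunkd unit file, since limits.conf is not applied to services started by systemd.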
Hi friends, my current query:

index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS MarkSweep__collection-count.csv"
| stats latest(value) as "Marksweep collection Count" by host
| join type=left max=0 host
    [ search index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS MarkSweep__total-time-ms.csv"
    | stats latest(value) as "Marksweep total time ms" by host ]
| join type=left max=0 host
    [ search index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS Scavenge__collection-count.csv"
    | stats latest(value) as "Scavenge collection count" by host ]
| join type=left max=0 host
    [ search index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS Scavenge__total-time-ms.csv"
    | stats latest(value) as "Scavenge total time ms" by host ]

I want to use the same index, same sourcetype, and same field name, but different sources, and get each source's field value per host. Instead of using join as above, kindly suggest alternate SPL to achieve this result. @gcusello
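Since all four legs share the index and sourcetype and differ only in source, a single pass with a case() on source plus xyseries can replace the joins entirely; a sketch (the wildcarded source pattern is an assumption, tighten it as needed):

index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="*com.redprairie.mad/JVM-Garbage-Collectors__PS*"
| eval metric=case(
    like(source, "%MarkSweep__collection-count.csv"), "Marksweep collection Count",
    like(source, "%MarkSweep__total-time-ms.csv"),   "Marksweep total time ms",
    like(source, "%Scavenge__collection-count.csv"), "Scavenge collection count",
    like(source, "%Scavenge__total-time-ms.csv"),    "Scavenge total time ms")
| stats latest(value) as value by host, metric
``` pivot the metrics into one column per metric, one row per host ```
| xyseries host metric value

This reads the data once instead of four times and avoids the subsearch limits that come with join.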
Hello champs, I have an index data table:

change     records  errors
B221205A   109      0
B221205B   1480     0
B221205C   3336     0
B221205D   2581     8

I also have a lookup table that contains:

File_name           Remote_file                  records
$APPLXYZ.C221205A   /APPLABC/B123/OUT/C221205A   109
$APPLXYZ.C221205D   /APPLABC/B123/OUT/C221205D   2581
$APPLXYZ.C221205C   /APPLABC/B123/OUT/C221205C   3336
/APPLABC/B123       /APPLABC/B123/OUT/C221205B   1480

I am looking for this result:

File_name           Remote_file                  records  change    errors
$APPLXYZ.C221205A   /APPLABC/B123/OUT/C221205A   109      B221205A  0
$APPLXYZ.C221205D   /APPLABC/B123/OUT/C221205D   2581     B221205B  0
$APPLXYZ.C221205C   /APPLABC/B123/OUT/C221205C   3336     B221205C  0
/APPLABC/B123       /APPLABC/B123/OUT/C221205B   1480     B221205D  8
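If the records value is the join key between the two tables (it is unique in the sample above), the lookup command can enrich the indexed events directly; a sketch, with the index and lookup file names as placeholders:

index=your_index
``` pull File_name and Remote_file from the lookup row whose records value matches;
    lookup file name remote_files.csv and the records join key are assumptions ```
| lookup remote_files.csv records OUTPUT File_name Remote_file
| table File_name, Remote_file, records, change, errors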
Hi, I need to send alerts where the machine first investigates something and then sends the alert as a natural-language message instead of tables and numbers (like a bot, something like a ChatGPT-style reply). Like this: "It seems you have an issue on machine X because the rate of responses decreased last night at 20:00." Any idea? Thanks
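Short of wiring in an actual language model, a templated sentence built with eval gets surprisingly close for fixed alert types; a minimal sketch, assuming the alert search yields host, a response-rate field named rate, and _time (all field names here are hypothetical):

``` the investigation search goes here; host, rate, and _time are assumed fields ```
| eval message="It seems you have an issue on machine ".host.
       " because the rate of responses decreased at ".strftime(_time, "%Y-%m-%d %H:%M").
       " (current rate: ".round(rate, 1).")"
| table message

The message field can then be referenced in the alert action (e.g. $result.message$ in an email body), so the recipient sees a sentence rather than a table.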