All Topics

Hi Splunk experts - I have an unusual math problem on my hands and I'm not sure how to deal with it. We are trying to prove how many tickets have been completed, so we are only counting the numbers that show improvement, not the numbers that show the addition of more tickets (following me?). Here's the data:

report_date    total
2022-11-07     4111
2022-11-08     3764
2022-11-09     3562
2022-11-10     3633
2022-11-11     3694
2022-11-14     7506
2022-11-15     12987
2022-11-16     15159
2022-11-17     14851
2022-11-18     14410
2022-11-21     6674
2022-11-22     5793
2022-11-23     5601

What I am trying to do is determine the difference between the "total" fields, but only when the count goes down. So, for example, 11/7 - 11/9 shows counts going down (4111 - 3562 = 549). But the numbers go up on 11/10, so we don't want to count those. Then the numbers go down again on 11/17, so I would add the difference between 11/16 and 11/17 to the previous 549. I feel like I am making this more complicated than it needs to be. Help.
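A minimal SPL sketch of one way to do this (the index name is a placeholder). Summing each day-over-day decrease gives the same 549 for the 11/7 - 11/9 run as subtracting the endpoints, so the running comparison only needs the previous row:

index=tickets
| sort 0 report_date
| streamstats current=f window=1 last(total) as prev_total ``` previous day's total ```
| eval completed=if(isnotnull(prev_total) AND total < prev_total, prev_total - total, 0)
| stats sum(completed) as tickets_completed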
Hi All, My query:

index=abt_htt_app host=thyfg OR host=jhbjj OR host=nmm sourcetype=app:abt:logs
| stats count as Transactions
| where Transaction>10
| appendcols [ index=tbt_htt_app host=juhy OR host=kuthf OR host=nmm sourcetype=app:abt:logs | stats count as Sucess | where Sucess>5 ]
| appendcols [ index=ccc_htt_app sourcetype=app:abt:even | stats count as failed | where falied>10 ]
| appendcols [ index=tbt_htt_app host=juhy OR host=kuthf OR host=nmm sourcetype=app:clt:logs | stats count as error | where error>45 ]

Output:

Transactions  Sucess  failed  error
12            5       4       10

But when a count condition is not met, those fields won't get displayed, and sometimes I get only the Transactions count in the table. In that case I want to add customized text like "No action required" under the table, as shown below. How can I do this?

Output:

Transactions
12
"No action required"
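One hedged approach: the where clauses inside the appendcols subsearches remove the whole column when their condition fails, so compute all four counts unconditionally, fill the gaps, and build the note with eval. (Note also the apparent typos in the original - where Transaction>10 vs. the field Transactions, and where falied>10 vs. failed - which would make those filters never match.) A sketch:

index=abt_htt_app (host=thyfg OR host=jhbjj OR host=nmm) sourcetype=app:abt:logs
| stats count as Transactions
| appendcols [ search index=tbt_htt_app (host=juhy OR host=kuthf OR host=nmm) sourcetype=app:abt:logs | stats count as Sucess ]
| appendcols [ search index=ccc_htt_app sourcetype=app:abt:even | stats count as failed ]
| appendcols [ search index=tbt_htt_app (host=juhy OR host=kuthf OR host=nmm) sourcetype=app:clt:logs | stats count as error ]
| fillnull value=0 Transactions Sucess failed error
| eval note=if(Sucess<=5 AND failed<=10 AND error<=45, "No action required", "Action required")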
Greetings, everyone. I apologize if this question has been answered before, but I really have a requirement to get a deeper understanding of how to proceed with this. We currently have 2 Splunk Enterprise indexer clusters. One of them is our prod infrastructure, spanned across two geo-separated datacenters, with 8 nodes total, 4 in each geo-site. We also have nonprod, which is a very similar setup but only one physical site, with 4 nodes making up the cluster. We have recently been asked to assist in migrating these clusters to brand new physical servers and have questions on the best way to proceed.

First, we have local SSD storage arrays on our current physical hosts (hot tier), and our "colddb" is located on a chunk of SAN storage, connected by Fibre Channel. This is where the wrinkle is. We are not getting new SAN storage for "colddb", so we will not be able to stand these new servers up and add them to the cluster as 9th nodes, let them replicate, then remove the ones they replace, getting us back to 8, repeating for all nodes. Instead, we will have to remove the SAN allocation from the old nodes and attach it to the new nodes, making this type of migration impossible.

My initial assumption is that instead we will need to decommission a node and replace it with a new node, one at a time, as if a node failed. Am I correct in this assumption? Is there a better way to handle this, or am I stuck with the current situation? Thanks for your time.
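If you do go node-by-node, the usual pattern is to take the old peer offline gracefully so the cluster manager rebalances its buckets before you detach the SAN allocation and attach it to the replacement. A rough CLI sketch (commands as documented for recent Splunk Enterprise versions; the manager URI, ports, and secret are placeholders - verify against your version's cluster docs):

# On the old peer being retired: wait for bucket fixup before shutdown
splunk offline --enforce-counts

# On the replacement peer, after attaching the SAN storage and setting volume paths:
splunk edit cluster-config -mode peer -manager_uri https://<cluster-manager>:8089 -replication_port 9887 -secret <cluster_key>
splunk restart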
I have two indexes: IndexA has a `thisId` field. IndexB has fields `otherId` and `name`. I want to write a query which returns a table of all `thisIds` with a matching `name`. The challenge has been writing the query such that it doesn't return all the `otherId` fields as well. My current query is:

(index="indexA") OR (index="indexB")
| eval id=coalesce(thisId, otherId)
| stats values(name) as name by id

However, this is returning all the ids in indexB as well. Thank you
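A hedged sketch of one fix: track which index each id was seen in and keep only ids that appear in both, so an otherId with no matching thisId drops out:

(index="indexA") OR (index="indexB")
| eval id=coalesce(thisId, otherId)
| stats values(name) as name, dc(index) as index_count by id
| where index_count=2 ``` id must occur in both indexA and indexB ```
| table id, name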
Hi Experts, Can we monitor Azure applications with AppDynamics? If yes, what options or offerings are available within AppD? Your inputs are greatly appreciated. Thanks in advance. Regards, MSK
Hi, I'm researching the Splunk Enterprise environment and as of now I'm on "Architecture Optimization". I had a quick question for version 9.0.2: what are the recommended ulimit increases on Linux for optimization purposes, and how are they applied? Regards,
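For reference, the limits most often called out in Splunk's Linux guidance are open file descriptors, user processes, and file size. A commonly cited /etc/security/limits.conf sketch, assuming Splunk runs as the "splunk" user (these are the usual documented minimums - verify against the 9.0.2 docs, and note that under systemd the equivalents go in the unit file as LimitNOFILE/LimitNPROC):

# /etc/security/limits.conf entries for the user running Splunk
splunk soft nofile 64000
splunk hard nofile 64000
splunk soft nproc 16000
splunk hard nproc 16000
splunk soft fsize unlimited
splunk hard fsize unlimited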
Hi Friends, My current query:

index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS MarkSweep__collection-count.csv"
| stats latest(value) as "Marksweep collection Count" by host
| join type=left max=0 host [ search index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS MarkSweep__total-time-ms.csv"
    | stats latest(value) as "Marksweep total time ms" by host ]
| join type=left max=0 host [ search index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS Scavenge__collection-count.csv"
    | stats latest(value) as "Scavenge collection count" by host ]
| join type=left max=0 host [ search index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__PS Scavenge__total-time-ms.csv"
    | stats latest(value) as "Scavenge total time ms" by host ]

I want to use the same index, same sourcetype, and same field name, but different sources, and get each source's field value per host. Instead of using join in the above query, kindly suggest alternate SPL to achieve this result. @gcusello
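A hedged alternative without join: search all four sources in one pass, label each event by which source it came from, then pivot with stats and xyseries (the match strings assume the source paths shown above):

index=pg_idx_whse_prod_events host="*" sourcetype=PG_ST_PROBE_DATA source="/opt/redprairie/prod/prodwms/les/data/csv_probe_data/com.redprairie.mad/JVM-Garbage-Collectors__*"
| eval metric=case(
    match(source, "MarkSweep__collection-count"), "Marksweep collection Count",
    match(source, "MarkSweep__total-time-ms"),    "Marksweep total time ms",
    match(source, "Scavenge__collection-count"),  "Scavenge collection count",
    match(source, "Scavenge__total-time-ms"),     "Scavenge total time ms")
| where isnotnull(metric)
| stats latest(value) as value by host, metric
| xyseries host metric value ``` one row per host, one column per metric ```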
Hello Champs, I have index data:

change     records  errors
B221205A   109      0
B221205B   1480     0
B221205C   3336     0
B221205D   2581     8

I also have a lookup table that contains:

File_name           Remote_file                 records
$APPLXYZ.C221205A   /APPLABC/B123/OUT/C221205A  109
$APPLXYZ.C221205D   /APPLABC/B123/OUT/C221205D  2581
$APPLXYZ.C221205C   /APPLABC/B123/OUT/C221205C  3336
/APPLABC/B123       /APPLABC/B123/OUT/C221205B  1480

I am looking for this result:

File_name           Remote_file                 records  change    errors
$APPLXYZ.C221205A   /APPLABC/B123/OUT/C221205A  109      B221205A  0
$APPLXYZ.C221205D   /APPLABC/B123/OUT/C221205D  2581     B221205B  0
$APPLXYZ.C221205C   /APPLABC/B123/OUT/C221205C  3336     B221205C  0
/APPLABC/B123       /APPLABC/B123/OUT/C221205B  1480     B221205D  8
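Assuming the lookup file is named something like remote_files.csv (a hypothetical name) and the records count is the shared key between the two datasets, one hedged sketch enriches the index data via lookup; note this only works cleanly if the record counts are unique per file:

index=yourindex ``` placeholder index ```
| lookup remote_files.csv records OUTPUT File_name, Remote_file
| table File_name, Remote_file, records, change, errors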
Hi, I need to send alerts where the machine investigates something and then reports a conclusion, something like a ChatGPT-style conversation instead of sending tables and numbers to users (like a bot). For example: "It seems you have an issue on machine X because the rate of responses decreased last night at 20:00." Any ideas? Thanks
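A minimal sketch of one way to do this in plain SPL: compute the metric, then assemble the sentence with eval string concatenation and use it as the alert's result field (index, span, and the 50% drop threshold are placeholders):

index=app_logs earliest=-2h
| bin _time span=1h
| stats count as responses by _time, host
| streamstats current=f window=1 last(responses) as prev_responses by host
| where isnotnull(prev_responses) AND responses < prev_responses * 0.5
| eval message="It seems you have an issue on machine " . host . " because the rate of responses dropped from " . prev_responses . " to " . responses . " at " . strftime(_time, "%H:%M") . "."
| table message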
Hi team - We currently use Elastic for log storage and alerting, but we are in the process of converting to Splunk. Currently we have some Elastic alerting that runs every five minutes and looks for the number of calls to a specific Apigee service. It works out how many calls were made in each 1-second interval, and alerts if the traffic in one or more intervals is above a threshold. Is it possible to do the same in Splunk? Run a query on hits in the last 5 minutes, bucket it into a count for each 1-second interval, and work out the highest count value?
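A minimal sketch (index, sourcetype, and the threshold of 100 are placeholders):

index=apigee sourcetype=apigee:proxy earliest=-5m
| bin _time span=1s          ``` one bucket per second ```
| stats count by _time       ``` calls in each 1-second interval ```
| stats max(count) as peak_calls_per_second
| where peak_calls_per_second > 100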
I have two savedsearches.

savedsearch1:
| basesearch | stats count by _time, LocationId

savedsearch2:
| basesearch | stats count by _time, LocationId

I want to track monitored LocationIds based on the criteria below:
1) LocationIds which are present in savedsearch2 but not in savedsearch1
2) If a LocationId is present in both reports, include it only when its savedsearch1 timestamp > savedsearch2 timestamp, otherwise exclude it

I could get the LocationIds which are only present in savedsearch2 using the query below, but I'm not able to make the time comparison:

| savedsearch "savedsearch1"
| eval flag="match"
| append maxtime=1800 timeout=1800 [ | savedsearch "savedsearch2" | eval flag="metric" ]
| stats values(flag) as flag by LocationId
| where flag="metric" AND flag!="match"
| table LocationId

Any help would be appreciated!
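A hedged sketch that folds the time comparison into one stats pass, keeping the latest _time per LocationId from each saved search and then applying both criteria:

| savedsearch "savedsearch1"
| eval src="ss1"
| append maxtime=1800 timeout=1800 [ | savedsearch "savedsearch2" | eval src="ss2" ]
| stats max(eval(if(src="ss1", _time, null()))) as ss1_time,
        max(eval(if(src="ss2", _time, null()))) as ss2_time by LocationId
| where isnotnull(ss2_time) AND (isnull(ss1_time) OR ss1_time > ss2_time)
| table LocationId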
Hi, In the old XML dashboards we used to have the "x" to remove the submit button for inputs (screenshot omitted), whereas in Dashboard Studio there isn't one. Does anybody know if the button can be hidden, and the dashboard made so that the default inputs are automatically executed without hitting Submit? Thanks a lot!
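A hedged sketch of the relevant Dashboard Studio JSON, assuming the submitButton and submitOnDashboardLoad layout options found in recent versions (key names are worth verifying against your version's Dashboard Studio docs). With submitButton set to false, input changes should apply immediately:

{
    "layout": {
        "type": "grid",
        "options": {
            "submitButton": false,
            "submitOnDashboardLoad": true
        }
    }
}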
Hello Splunkers!! I have attached below two screenshots related to skipped searches. As per the first graph, we often have a high number of skipped searches. When I validated those, I saw that no workload_pool is assigned to many saved searches (attached in the second screenshot). My thought here: if many searches trigger at the same time and there is no workload_pool setting assigned, it will impact search performance and increase the skip ratio. Please let me know if I am thinking about this the right way. If not, please guide me or suggest some good workarounds. I know there are many blogs available on this, but please do share any specific suggestions.
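For investigating which searches are skipped and why, a commonly used starting point against the scheduler logs (field names as they appear in _internal):

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count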
Hello, My Splunk Enterprise is no longer syncing Tenable Data. Please help. Thank you.
Hello, Where do I find information on how to troubleshoot the below message?

2022-12-05 15:21:53,383+0000 INFO pid=299674 tid=MainThread file=threatmatch.py:run:404 | status="This modular input does not execute on a search head cluster member" msg="will_execute="False" config="SHC" msg="Deselected based on SHC master selection algorithm." master_host="None" use_alpha="None" exclude_master="None""

The events are in the _internal index with sourcetype=threatintel:threatmatch. I have a hard time finding documentation that points me to a solution.
Hi Splunkers, I use many alerts where the result contains the username. Then a map search looks for this user in the user list index, checks the group memberships, and sends the alert to the corresponding IT department (there are many countries, and a lookup gives the support email by the user's country group). If the user is not a member of any country group, the support email evals to the central one. That works fine... until the user is missing from the users index. If the user cannot be found there, the whole search stops working. Example:

index=logons action=failure
| stats dc(action) as failures by username
| where failures > 20
| map maxsearches=50 search="search index=users user=\"$username$\"
    | spath memberOf{}.displayName output=groupName
    | eval username=\"$usernam$\", failures=\"$failures$\"
    | lookup support.csv group as groupName output support"
| eval support = if(isnull(support) OR support="", "central@example.com", support)
| table username, failures, support

So if a user failed to log in more than 20 times, the alert triggers and sends an email to support - assigned by the group membership; if there is no membership, it goes to central IT. But if the user cannot be found in index=users for some reason, the alert does not trigger at all. I would like the alert to trigger and send to central@example.com (since a non-existing user has no group) with the username from the base search included.
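A hedged alternative: replace map with a left join, which keeps the row even when the user is missing from index=users, so the central fallback still applies. (Note also the apparent $usernam$ typo in the original map search.) A sketch, assuming index=users events carry a user field:

index=logons action=failure
| stats dc(action) as failures by username
| where failures > 20
| join type=left username [ search index=users
    | spath path=memberOf{}.displayName output=groupName
    | lookup support.csv group as groupName OUTPUT support
    | rename user as username
    | stats values(support) as support by username ]
| eval support=if(isnull(support) OR support="", "central@example.com", support)
| table username, failures, support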
Hello, in the previous XML-based dashboard approach in Splunk, I was able to hide charts based on a token value with something like:

<chart depends="$showTypeCharts$">

However, with the new JSON-based dashboards, I'm not able to hide a visualization. How do I do that?
I am forwarding F5 logs from a syslog server, but I have an additional timestamp and host IP (the leading "Dec 5 09:45:55 172.16.97.188" in the sample log below). I would like to remove these at index time. I am trying to accomplish this using SEDCMD. My regex test is good and I've also tried several iterations of the regex. Any ideas on what I am doing wrong?

Location: /opt/splunk/etc/apps/search/local/props.conf

[f5-apm]
category = Network & Security
pulldown_type = 1
SEDCMD-noheader = /s^\w+\s+\d+\s+\d+:\d+:\d+\s+\d+\.\d+\.\d+\.\d+\s+//g

Sample event:

Dec 5 09:45:55 172.16.97.188 Dec 5 09:45:45 gg-f5-02.domain.org notice tmm1[24012]: 01490500:5: /dmz/VPNClient_access_policy:dmz:17709577: New session from client IP 54.244.52.193 (ST=Oregon/CC=US/C=NA) at VIP 172.16.253.152 Listener /dmz/apm_vpn_vs_https (Reputation=Unknown)
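A hedged guess at the problem: the sed expression begins with /s instead of s/, so Splunk never parses it as a substitution. A corrected stanza might look like the following; note that SEDCMD takes effect where parsing happens (typically the indexer or a heavy forwarder, not a search head app) and only applies to newly indexed data:

[f5-apm]
category = Network & Security
pulldown_type = 1
# Strip the leading syslog timestamp and relay IP added by the syslog server
SEDCMD-noheader = s/^\w+\s+\d+\s+\d+:\d+:\d+\s+\d+\.\d+\.\d+\.\d+\s+//g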
My search is not working. I want to get hits per minute, like this (screenshot omitted), but my search doesn't produce anything like that:
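The post doesn't include the search itself, but a minimal sketch of a hits-per-minute chart (index and sourcetype are placeholders):

index=web sourcetype=access_combined
| timechart span=1m count as "Hits per minute"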
Hi All, I need your help to determine the details of issues which affect users while running SPL. The details may include errors, their respective SPL, the date/timestamp of occurrence, and any other information that can be fetched and used to resolve those issues. So far I have tried the below:

1. Fetching the saved search names and their errors:
index=_internal source=*scheduler.log search_type=scheduled | stats count BY savedsearch_name, reason

2. Fetching the list of errors for all saved searches:
index=_internal source=*scheduler.log search_type=scheduled | stats count BY reason

Is there any other SPL that can be built and used to get more errors which are not covered by the above? For example, errors such as:
- Scheduled searches with syntax errors
- Corrupted data

And how do I fetch errors for SPL which is executed by end users on an ad-hoc basis? Additionally, it would be helpful if you could share an approach to determine which index fails the most over a period of time. Thank you
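Two hedged starting points, neither exhaustive (per-search errors also land in each job's search.log inside the dispatch directory, which is not indexed).

Ad-hoc searches run by users, from the audit trail:

index=_audit action=search info=completed
| table _time, user, search, savedsearch_name, total_run_time

splunkd-side errors grouped by component:

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
| sort - count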