All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I have this search:

| inputlookup lkp-all-findings
| lookup lkp-findings-blacklist.csv blfinding as finding OUTPUTNEW blfinding
| lookup lkp-asset-list-master "IP Adresse" as ip OUTPUTNEW Asset_Gruppe Scan-Company Scanner Scan-Location Location "DNS Name" as dns_name Betriebssystem as "Operation System"
| lookup lkp-GlobalIpRange.csv 3-Letter-Code as Location OUTPUTNEW "Company Code"
| dedup finding, dns_name, ip
| stats values("Company Code") as "Company Code" by finding, dns_name, ip, Asset_Gruppe, Scan-Company, Scanner, Scan-Location, Location, Betriebssystem

This is the result. I have tried mvexpand and stats as well, but it still gives multiple values. The problem is: for the NessusHost "slo-svenessus01.emea.durr.int" there are, say, 20 Nessus hosts with this name, and the search duplicates the "Company Code" value (HHDE) 20 times in every single field for each NessusHost with this name, and the same happens for the others as well.
I do have a solution for this, but I wonder if there is a more straightforward approach, to get a better understanding of multi-search scenarios. I want to monitor which Windows forwarders have broken performance counters, or are just not sending in performance counters for whatever reason. There's a CSV lookup file with the server names I want to monitor, and my idea was to have the search give me a table of all the servers in that lookup file which come back with 0 results for a given search. My working solution is this:

| inputlookup domaincontrollers.csv
| table Name
| eval count=0
| append [search index=perfmon counter="% Processor Time" | rename host as Name | stats count by Name]
| stats sum(count) by Name
| rename "sum(count)" as activity
| where activity=0

I had played with appendcols, but found that it would only merge the servers with results in the subsearch, and not list the others in the results. Is there any search method I should read up on for a scenario like this? Thanks.
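For anyone trying to follow the append-then-sum pattern in the working search, the merge logic can be sketched outside SPL. This is only an illustrative Python model (the host names are made up), not Splunk code: every monitored name starts at zero, observed counts are appended, totals are summed per name, and names whose total stays zero are the silent forwarders.

```python
from collections import Counter

def silent_hosts(monitored, observed_events):
    """Mirror of: inputlookup + eval count=0 + append [stats count by Name]
    + stats sum(count) by Name + where activity=0."""
    totals = Counter({name: 0 for name in monitored})  # eval count=0
    totals.update(Counter(observed_events))            # append the search results
    # Only names whose summed activity is still 0 survive the final filter
    return sorted(name for name, total in totals.items() if total == 0)

print(silent_hosts(["dc01", "dc02", "dc03"], ["dc01", "dc01", "dc03"]))  # -> ['dc02']
```

The append approach works precisely because, unlike appendcols, it keeps rows from both sides and lets the final stats re-group them by Name.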
Hi everyone, I'm using Splunk Cloud with the Splunk Add-on for Microsoft Cloud Services  to manage two Azure subscriptions. As a result, I have duplicated inputs, and I need a way to reference each subscription within my queries. I noticed that the subscriptionId field exists, but it contains four variations: two in lowercase and two in uppercase. I'd like to normalize this field to lowercase at ingest time, so I don't have to handle it manually in every query. I checked the Field Transformations, but I couldn't find any mention of subscriptionId (I only see subscription_id). Has anyone dealt with a similar issue, or can anyone suggest the best approach? Thanks in advance for your help! (P.S. I'm relatively new to Splunk and Splunk Cloud, so any guidance is greatly appreciated!)
We receive all alerts from Splunk Cloud with the sender alerts@splunkcloud.com. Can we change the sender to another domain, e.g. xxx@xxx.abc? Do we need to raise a support ticket to have a change request for this?
I have a problem with a Splunk Classic dashboard that I created: the table panel is not properly connected to the dropdowns. As an example, here is the dashboard source:

<input type="text" token="end_id" searchWhenChanged="true">
  <label>End To End Id</label>
  <default>*</default>
</input>
<input type="dropdown" token="code_cihub">
  <label>Code Transaction CI HUB</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>code_cihub</fieldForLabel>
  <fieldForValue>code_cihub</fieldForValue>
  <search>
    <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"
    | search "Info Transaction CI HUB"="$info$"]
| search "End To End Id"="$end_id$" "Status Transaction CI HUB"="$status_cihub$"
| stats count by "Code Transaction CI HUB"
| rename "Code Transaction CI HUB" as code_cihub</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</input>
<input type="dropdown" token="info">
  <label>Info Transaction CI HUB</label>
  <choice value="*">All</choice>
  <choice value="O 70">Velocity Transaction</choice>
  <choice value="O 71">Gambling RFI</choice>
  <choice value="O 72">Gambling OFI</choice>
  <choice value="O 73">DTTOT Transaction</choice>
  <default>*</default>
  <fieldForLabel>info</fieldForLabel>
  <fieldForValue>info</fieldForValue>
  <search>
    <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"]
| search "End To End Id"="$end_id$" "Status Transaction CI HUB"="$status_cihub$"
| stats count by "Info Transaction CI HUB"
| rename "Info Transaction CI HUB" as info</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</input>
<input type="dropdown" token="status_cihub">
  <label>Status Transaction CI HUB</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>status_cihub</fieldForLabel>
  <fieldForValue>status_cihub</fieldForValue>
  <search>
    <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"
    | search "Info Transaction CI HUB"="$info$"]
| search "End To End Id"="$end_id$" "Code Transaction CI HUB"="$code_cihub$"
| stats count by "Status Transaction CI HUB"
| rename "Status Transaction CI HUB" as status_cihub</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</input>
<row>
  <panel>
    <table>
      <title>Monitoring Response</title>
      <search>
        <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"
    | search "Info Transaction CI HUB"="$info$"]
| search "End To End Id"="$end_id$" "Code Transaction CI HUB"="$code_cihub$" "Status Transaction CI HUB"="$status_cihub$"
| table _time, "End To End Id", "Code Transaction CI HUB", "Info Transaction CI HUB", "Status Transaction CI HUB", "Description CI HUB"
| sort - _time</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
    </table>
  </panel>
</row>

The main problem I'm facing: on the "Info Transaction CI HUB" dropdown, which I made static, when I select one of the values, the contents of the "Monitoring Response" table do not change according to the "Info Transaction CI HUB" value I selected. Please help me solve this problem. Thank you.
I wanted to have the same base configuration app for all workstations, with serverclasses divided by organization, but the base app the same for everyone. Now I have a problem: when you make a change (add a host through the web GUI to one serverclass) and click save, it changes the bundle epoch time under global_bundles, and then the other serverclasses report that the file does not exist on the server when clients try to download the app. If I then run "splunk reload deploy-server", it's fine again. But every time I need to add a client to any workstation serverclass, it breaks all the other serverclasses. It's pretty rough to run the reload deploy-server command every time, because it puts a pretty high load on the DS. Is there any other way to handle this than making class-specific base apps? Running 9.4.1, 12 vCPU / 12 GB RAM.
I maintain an app on Splunk, the AbuseIPDB App. This app uses a collection that holds a set of key-value pairs for things like user state and settings, and it's looked up on every command (i.e. abuseipdbcheck ip="127.0.0.1"). We had been receiving bug reports about a KeyError that seemed to have been fixed by setting replicate=true for the collection. I suppose that because the app's configuration collection was not being replicated, distributed searches failed (since the configuration collection was not being found on the individual search peers?, hence the KeyError). However, I've just received another report, with the same issue, this time from a Splunk Cloud Victoria setup. The collection does have replicate=true. Can anyone give some guidance on this?
I am searching on a key:value field from a report app where the values are inconsistent but consistently include a cluster name. Example of the key:value:

APP_Details:{"CLUSTER_VIP":"CLUSTERX.URL.COM","Access":true}

There are over 100 APP_Details values for CLUSTERX. How can I extract CLUSTERX (there are three different cluster names) to show as a single value by cluster? Thanks
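As a sketch of the extraction logic (using the exact sample value from the question), capturing the CLUSTER_VIP host part up to the first dot yields the cluster name. This Python check mirrors what a rex with the same pattern would do; the pattern is an assumption based only on the one example shown:

```python
import re

sample = 'APP_Details:{"CLUSTER_VIP":"CLUSTERX.URL.COM","Access":true}'
# Capture the host portion of CLUSTER_VIP, stopping at the first dot or quote
m = re.search(r'"CLUSTER_VIP":"([^."]+)', sample)
cluster = m.group(1) if m else None
print(cluster)  # -> CLUSTERX
```

In SPL the equivalent would be something along the lines of | rex field=APP_Details "\"CLUSTER_VIP\":\"(?<cluster>[^.\"]+)" followed by | stats count by cluster, assuming APP_Details arrives as a raw string field.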
Lately our search heads run into issues where the srtemp folder balloons to 80+ GB and fills the local hard drive. To temporarily fix it, I have to shut down Splunk and then run the Splunk command clean-srtemp. It will then be good for several days to a week, and eventually it's back again; rinse and repeat. I'm curious what my options are for determining what is causing the srtemp folder to fill so consistently.
I have a field that I need to search on that is a long string of comma-separated values. It comes from our vulnerability scanner tool, Qualys, and looks something like this:

"OS: Windows 10 22H2, Port: 53, AV: Installed, SW: Maya, SVC: SiegeTower"

I have a multiselect dropdown on the dashboard with each unique tag, so that my users can select any/all tags that matter to them. Application owners may only be concerned about viewing data related to their particular service on a particular operating system (the user selects "OS: Windows 10 22H2" and "SVC: SiegeTower", for example). The problem I'm running into is that when users select multiple tags, the search looks like this:

<base search> | search TAGS IN ("OS: Windows 10 22H2","SVC: SiegeTower") | ...

which returns zero results. What I really need is:

<base search> | search TAGS IN ("*OS: Windows 10 22H2*","*SVC: SiegeTower*") | ...

which has wildcard characters on each search selection and does return the correct results. Is there any way to add wildcards to the multiselect dropdown selections to get the right results? The only other option I tried was a combination of split and mvexpand on the TAGS field to perform the search, but between thousands of endpoints and dozens of tags, I ran into memory issues that I won't be able to overcome any time soon. Any help here is appreciated!
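The wildcard requirement itself can be modelled with shell-style matching; this illustrative Python snippet (sample data copied from the question) just demonstrates why the bare selections fail while the wrapped ones match:

```python
from fnmatch import fnmatch

tags_field = "OS: Windows 10 22H2, Port: 53, AV: Installed, SW: Maya, SVC: SiegeTower"
selected = ["OS: Windows 10 22H2", "SVC: SiegeTower"]

# Wrap each selection in wildcards, as the working search does by hand
patterns = [f"*{s}*" for s in selected]

# IN (...) has OR semantics: the event matches if any pattern matches
print(any(fnmatch(tags_field, p) for p in patterns))   # -> True
# Exact values never match the full comma-separated string
print(any(fnmatch(tags_field, s) for s in selected))   # -> False
```

On the dashboard side, Simple XML multiselect inputs support valuePrefix, valueSuffix, and delimiter settings, which can wrap each selected value in * characters before it reaches the token; that is worth checking as a cheaper alternative to split/mvexpand.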
Is there any documentation on creating an input for this app? (https://splunkbase.splunk.com/app/6608) I installed the app. Upon launching, it's asking for certificate and private key. There is no place for me to configure the API endpoint. thanks,
Hello everyone, I’ve encountered a problem while setting up a correlation search. For instance, when I use the following query: index=windows AND EventCode=4624 I end up getting multiple alerts. To refine this, I attempted to add a Drill Down Search like this: index=windows AND EventCode=4624 host="$host$" However, this returns no results. Does anyone have suggestions or ideas that might help resolve this? Any input would be greatly appreciated!
Hello Splunkers! I am looking for a way to collect SunOS-SPARC OS logs. After some research, I tried updating the inputs.conf in the Splunk Add-on for Unix and Linux ( https://splunkbase.splunk.com/app/833 ) as below (this is a snippet of the config file, not all of it):

# Currently only supports SunOS, Linux, OSX.
# May require Splunk forwarder to run as root on some platforms.
[script://./bin/service.sh]
disabled = 0
interval = 3600
source = Unix:Service
sourcetype = Unix:Service
index = os

# Currently only supports SunOS, Linux, OSX.
# May require Splunk forwarder to run as root on some platforms.
[script://./bin/sshdChecker.sh]
disabled = 0
interval = 3600
source = Unix:SSHDConfig
sourcetype = Unix:SSHDConfig
index = os

# Currently only supports Linux, OSX.
# May require Splunk forwarder to run as root on some platforms.
[script://./bin/update.sh]
disabled = 0
interval = 86400
source = Unix:Update
sourcetype = Unix:Update
index = os

[script://./bin/uptime.sh]
disabled = 0
interval = 86400
source = Unix:Uptime
sourcetype = Unix:Uptime
index = os

[script://./bin/version.sh]
disabled = 0

This didn't work and no logs were collected (I have made sure the user running the Splunk forwarder has read privileges). Is there any other recommendation?
Hi Experts, I have a scenario in which there are 10 tiers associated with an application. After updates/patching, some tiers are missing in the AppDynamics console. Is there any way we can create an alert when this scenario happens? Will a health rule with manually added tiers help to get an alert? I don't have a real-time scenario to test. Thanks.
I have a stream of logs from a system. To filter for errors, I can perform a search like so:

index=project1 sourcetype=pc1 log_data="*error*"

I can use it to get errors; however, I also want the events surrounding each error. I want to be able to get all events that occurred 1 minute before and 1 minute after (all events, not just errors). What would be the best possible way to achieve this?
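The windowing logic itself is simple; this Python sketch (timestamps in epoch seconds, sample messages invented) shows the intended result: keep every event within 60 seconds of any error event.

```python
def events_around_errors(events, window=60):
    """events: list of (epoch_seconds, message). Returns all events that fall
    within `window` seconds of any event whose message contains 'error'."""
    error_times = [t for t, msg in events if "error" in msg.lower()]
    return [(t, m) for t, m in events
            if any(abs(t - et) <= window for et in error_times)]

log = [(0, "boot"), (100, "warn"), (130, "ERROR disk"), (150, "retry"), (300, "ok")]
print(events_around_errors(log))  # -> [(100, 'warn'), (130, 'ERROR disk'), (150, 'retry')]
```

In Splunk itself, one common shape is a subsearch that finds the error _time values and emits earliest/latest bounds, or the map command run once per error; either way, the per-error test is the same ±60 s comparison shown here.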
Hi Team, I have tried to integrate with ThousandEyes. On the TE side, I confirmed the connection to Splunk shows a normal status, but from the Splunk perspective I can't see the chart or the real-time information. I would like to know how I can confirm whether the integration was successful, and how to view the real-time information on the Splunk side. Thanks.
To create a new endpoint named get_ticket_id in your Django application, follow these steps:

Steps:
1. Define a function in your views to handle the logic of accepting two strings, calling the desired function, and returning the result.
2. Create a URL route to point to the new view function.
3. Implement the logic for the function you want to call.

Example Code: views.py

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json

# Sample function to process the two strings
def process_strings(string1, string2):
    # Example logic to generate a ticket ID
    return f"Ticket-{string1[:3]}-{string2[:3]}"

@csrf_exempt
def get_ticket_id(request):
    if request.method == "POST":
        try:
            # Parse the request body
            data = json.loads(request.body)
            string1 = data.get("string1")
            string2 = data.get("string2")
            if not string1 or not string2:
                return JsonResponse({"error": "Both 'string1' and 'string2' are required."}, status=400)
            # Call the processing function
            ticket_id = process_strings(string1, string2)
            return JsonResponse({"ticket_id": ticket_id}, status=200)
        except json.JSONDecodeError:
            return JsonResponse({"error": "Invalid JSON format."}, status=400)
    return JsonResponse({"error": "Only POST requests are allowed."}, status=405)
Hi Community, I have a JSON data source that I am trying to get into Splunk via a heavy forwarder, using a custom-built app that makes an API call. For some reason my LINE_BREAKER seems to be getting ignored; every line ends and starts as follows:

myemail@this-that-theother.co"},{"specialnumber":"number"

The line break is the comma between the close and open curly braces, i.e. ,{

This is the line I am using in my props.conf:

LINE_BREAKER = (\,)\{\"

For some reason the data continues to come in as one big blob of multiple events. This is my props.conf:

KV_MODE = json
SHOULD_LINEMERGE = 0
category = something
pulldown_type = 1
TZ = UTC
TIME_PREFIX=\"time\"\:\"
MAX_TIMESTAMP_LOOKAHEAD = 20
TIME_FORMAT =%Y-%m-%dT%H:%M:SZ
TRUNCATE = 999999
LINE_BREAKER = (\,)\{\"
EVENT_BREAKER_ENABLE = false

Time comes in as such: "time":"2025-03-25T19:36:35Z"

Am I missing something?
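One detail worth knowing: LINE_BREAKER breaks at the regex match and discards only the text of the first capture group, so (\,)\{\" should consume the comma and leave {" attached to the next event. That splitting behavior can be roughly modelled in Python (this is only an approximation of the props.conf semantics, not Splunk itself):

```python
import re

stream = '{"a":"1","time":"2025-03-25T19:36:35Z"},{"a":"2","time":"2025-03-25T19:37:35Z"}'
# Model of LINE_BREAKER = (\,)\{\" : break where a comma is followed by {",
# discarding the captured comma but keeping {" with the next event.
events = [e for e in re.split(r'(\,)(?=\{\")', stream) if e != ","]
print(len(events))  # -> 2
```

If the pattern works here but events still arrive as one blob, the usual suspects are the props being applied on the wrong component (the first heavy forwarder in the path is where parsing happens) or the stanza name not matching the sourcetype. Separately, TIME_FORMAT in the stanza above ends in %H:%M:SZ, where the S is missing its percent sign, which would break timestamp parsing even once line breaking works.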
I have the search below and I want to modify it to get the bandwidth utilization percentage. What's the best way to go about that, and what should I add to my search?

index=snmp sourcetype=snmp_attributes Name=ifHCInOctets host=xyz
| streamstats current=t global=f window=2 range(Value) AS delta BY UID
| eval mbpsIn=delta*8/1024/1024
| append [search index=snmp sourcetype=snmp_attributes Name=ifHCOutOctets host=xyz
    | streamstats current=t global=f window=2 range(Value) AS delta BY UID
    | eval mbpsOut=delta*8/1024/1024 ]
| search UID=1
| timechart span=5m per_second(mbpsIn) AS MbpsIn per_second(mbpsOut) AS MbpsOut BY UID
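Utilization is the measured bit rate divided by the interface speed. This Python sketch shows the per-interval arithmetic; the 1 Gbit/s speed is an assumed example value, since in practice the real speed would come from the interface's speed attribute (e.g. ifHighSpeed) or a lookup:

```python
def utilization_pct(prev_octets, curr_octets, interval_s, if_speed_bps):
    # Counter delta over the polling interval, converted octets -> bits per second
    bps = (curr_octets - prev_octets) * 8 / interval_s
    return 100.0 * bps / if_speed_bps

# 7.5 GB transferred in a 5-minute window on an assumed 1 Gbit/s link
print(utilization_pct(0, 7_500_000_000, 300, 1_000_000_000))  # -> 20.0
```

In the search itself, the same shape would be an eval after the Mbps calculations, something like | eval utilPct=round((MbpsIn/ifSpeedMbps)*100,2), with ifSpeedMbps joined in per UID (the field name here is hypothetical). Counter wrap on device reboots is the usual caveat when taking range(Value) over raw octet counters.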
Need help cleaning up my rex command lines. The data is delimited by commas, and I want to extract the value after the "=" character from the fields Location=, Computer=, User=, Date=.

Sample index data:
    Index = computerlogs
    Field name: CompLog
    Field values:
        Loc=Warehouse, Comp=WH-SOC01, User= username1, Date=2025-03-18
        Loc=Warehouse, Comp=WH-SOC02, User= username2, Date=2025-03-20
        Loc=Warehouse, Comp=WH-SOC03, User= username1, Date=2025-03-24

I created a dashboard showing all logins with only computer name, user, and date, as below.

Working query:
    index=computerlogs
    | rex field=CompLog "([^,]+,){1}(?<LogComp>[^,]+)"
    | rex field=LogComp "\=(?<Computer>[^,]+)"
    | rex field=CompLog "([^,]+,){2}(?<LogUser>[^,]+)"
    | rex field=LogUser "\=(?<User>[^,]+)"
    | rex field=CompLog "([^,]+,){3}(?<LogDate>[^,]+)"
    | rex field=LogDate "\=(?<Date>[^,]+)"
    | table Computer User Date

    Computer    User         Date
    WH-SOC01    username1    2025-03-18
    WH-SOC02    username2    2025-03-20
    WH-SOC03    username1    2025-03-24

My ask is to clean up the rex commands above so I have only one rex command line for each field I am trying to capture, if possible. I tried to combine the two rex command lines into one. I know I need to add the "\=" to capture everything after the "=" character, but I get an error with the attempts below:

    | rex field=CompLog "([^,]+,){1}\=(?<Computer>[^,]+)"
    | rex field=CompLog "([^,]+,){1}"\=(?<Computer>"[^,]+)"
    | rex field=CompLog "([^,]+,){1}"\=(?<Computer>[^",]+)"

Any help would be greatly appreciated. Thanks.
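One way to sanity-check a combined pattern before putting it into rex is to run it through Python's re, which supports the same named-group idea. This sketch matches each name=value pair directly instead of counting commas (note the \s* that absorbs the stray space after User=); it is offered as a checking technique, not the only possible pattern:

```python
import re

# Match the value after each "name=" up to the next comma
pattern = r"Comp=\s*(?P<Computer>[^,]+),\s*User=\s*(?P<User>[^,]+),\s*Date=\s*(?P<Date>[^,]+)"
sample = "Loc=Warehouse, Comp=WH-SOC01, User= username1, Date=2025-03-18"
m = re.search(pattern, sample)
print(m.group("Computer"), m.group("User"), m.group("Date"))  # -> WH-SOC01 username1 2025-03-18
```

In SPL, rex uses the (?<Name>...) group syntax, so a single command could replace all six: | rex field=CompLog "Comp=\s*(?<Computer>[^,]+),\s*User=\s*(?<User>[^,]+),\s*Date=\s*(?<Date>[^,]+)" — one rex extracting all three fields at once.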