All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, could you tell me which takes priority: capabilities explicitly enabled/disabled on a role, or capabilities inherited from other roles? I had to manually edit etc/system/local/authorize.conf (clustered environment) to set edit_correlationsearches = enabled (it was disabled), even though the role inherited ess_admin, ess_analyst, and power. Thanks for your help.
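For reference, a minimal sketch of one way to compare a role's explicit and inherited capabilities via the REST API, using ess_analyst from the post as the example role:

| rest /services/authorization/roles splunk_server=local
| search title="ess_analyst"
| table title capabilities imported_capabilities imported_roles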
I have a KV Store with replicate turned on, a lookup definition with WILDCARD(match_field), and an automatic lookup configured to output a numeric lookup_field. When I run a search on the relevant source type, I see the lookup_field. However, when I search on the lookup_field (e.g., "lookup_field=1"), the search finishes quickly and doesn't return anything.

This is an example of the lookup:

mac,exception
00ABCD*,1
11EEFF*,1

This is an example of the lookup definition:

WILDCARD(mac)

This is an example of the automatic lookup:

lookup mac_addresses mac OUTPUT exception

Here is an example of a search that does not return the expected results:

index=mac_index exception=1

Here's what's really strange: it works for some events, but not others. When I run this, I get five events:

earliest=7/29/2024:00:00:00 latest=7/30/2024:00:00:00 index=logs exception=1

When I run this (adding the manual lookup), I get 109 events (which is accurate):

earliest=7/29/2024:00:00:00 latest=7/30/2024:00:00:00 index=logs | lookup exception_lookup mac OUTPUTNEW exception | search exception=1

Any ideas what could cause this? Any ideas on how to troubleshoot it?
I have a saved search scheduled to run every 17 minutes with a time range of the last 7 days. Instead of getting results for the full 7 days of data, can I get only the last 15 or 20 minutes of data, without changing the saved search's time range from last 7 days?
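A minimal sketch of one possible approach, assuming the 7-day window has to stay in place: keep the scheduled time range as-is and filter the events down to the most recent 20 minutes inside the search itself. The index name below is a placeholder, and the rest of the saved search's logic would follow after the where clause.

index=my_index
| where _time >= relative_time(now(), "-20m")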
Hi, I have a group field "bin" and a query that starts with index=myindex response_code!=00. I'm not sure how to create an alert to warn when there is an x percentage increase from day to day in any of the bins. I tried something along these lines, but could not get prev_error_count to populate:

index=myindex sourcetype=trans response_code!=00
| bin _time span=1d as day
| stats count as error_count by day, bin
| streamstats current=f window=2 last(error_count) as prev_error_count by bin
| eval perc_increase = round((error_count / prev_error_count)*100, 2)
| table perc_increase
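For comparison, a hedged sketch of how the day-over-day calculation could be structured. The index, sourcetype, and field names come from the post; the sort, the window of 1, and the example threshold of 10 are assumptions.

index=myindex sourcetype=trans response_code!=00
| bin _time span=1d
| stats count AS error_count BY _time, bin
| sort 0 bin _time
| streamstats current=f window=1 last(error_count) AS prev_error_count BY bin  ``` previous day's count per bin ```
| eval perc_increase = round(((error_count - prev_error_count) / prev_error_count) * 100, 2)
| where perc_increase > 10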
I have a set of data which comes from two indexes. It looks more or less like this:

(index="o_a_p") OR (index="o_d_p")
```a```
| eval ca = substr(c_u,2,length(c_u))    ``` transformation of oap index ```
```d```
| eval e_d = mvindex(split(ed, ","), 0)  ``` transformation of odp index ```
| eval cd = mvindex(split(Rr, "/"), 0)
| eval AAA=c_e.":".ca
| eval DDD=e_d.":".cd
| eval join=if(index="o_a_p",AAA,DDD)  ``` join field ```
| stats dc(index) AS count_index values(Op) as OP values(t_t) as TT BY join
| where count_index=2

So now, how do I create a timechart based on the fields that come from stats? There is no _time field there.

K.
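A rough sketch of one way to keep _time available for charting: bin the events and carry _time through the stats (replacing the last two lines above), then chart. Note this changes the meaning slightly, since count_index=2 then only matches pairs that land in the same time bucket; the span values and the output name are assumptions.

| bin _time span=1h
| stats dc(index) AS count_index values(Op) AS OP values(t_t) AS TT BY _time, join
| where count_index=2
| timechart span=1h dc(join) AS matched_pairs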
I have just done a fresh install of Splunk 9.3.0 with Security Essentials. I'm getting the following message:

Error in 'sseidenrichment' command: (AttributeError) module 'time' has no attribute 'clock'

Can you help?
Hi all, Today I updated Splunk from version 9.2.2 to 9.3.0. All seems to be good except Alert Manager Enterprise 3.0.8, which is not working anymore. I'm fairly new to Splunk, so I don't know where to start. The error we got is:

Unable to initialize modular input "tag_keeping" defined in the app "alert_manager_enterprise": Introspecting scheme=tag_keeping: script running failed (PID 4085525 exited with code 1)

Please help me.

Kind regards,
Glenn
Hello, I have an app and this is its default navigation XML:

<nav search_view="search" color="#65A637">
  <view name="search" default="true"/>
  <view name="securehr"/>
  <view name="secure_group_members"/>
  <view name="changes_on_defined_secure_groups"/>
  <view name="group_membership"/>
  <view name="group_membership_v2"/>
  <collection label="Reports">
    <view name="SecureFolder-Report for APT">Report for APT</view>
    <view name="SecureFolder-Report for Betriebsrat" default="true">Report for Betriebsrat</view>
    <view name="SecureFolder-Report for CMA" default="true">Report for CMA</view>
    <view name="SecureFolder-Report for HR" default="true">Report for HR</view>
    <view name="SecureFolder-Report for IT" default="true">Report for IT</view>
    <view name="SecureFolder-Report for QUE" default="true">Report for QUE</view>
    <view name="SecureFolder-Report for Vorstand" default="true">Report for Vorstand</view>
  </collection>
</nav>

From this XML it should show 7 sections on the navigation panel, but it does not show the Reports section. Can anyone help?
Hi,

This thing is driving me crazy. I am running Splunk 9.2.1 and I have the following table:

amount   compare   frac_type   fraction   integer
0.41     F         Number      0.41       0
4.18     F         Number      0.18       4
0.26     F         Number      0.26       0
0.34     F         Number      0.34       0
10.60    F         Number      0.60       10
0.11     F         Number      0.11       0
2.00     F         Number      0.00       2
3.49     F         Number      0.49       3
10.58    F         Number      0.58       10
2.00     F         Number      0.00       2
1.02     F         Number      0.02       1
15.43    F         Number      0.43       15
1.17     F         Number      0.17       1

And these are the evals I used to calculate the fields:

| eval integer = floor(amount)
| eval fraction = amount - floor(amount)
| eval frac_type = typeof(fraction)
| eval compare = if(fraction = 0.6, "T", "F")

Now, I really can't understand why the "compare" field is always false. I was expecting it to output TRUE on row 5, where amount = 10.60 and therefore fraction = 0.6, but it does not. What am I doing wrong here? Why does "compare" evaluate to FALSE on row 5? I tried changing 0.6 to 0.60 (you never know), but no luck.

If you want, you can try this run-anywhere search, which gives me the same result:

| makeresults
| eval amount = 10.6
| eval integer = floor(amount)
| eval fraction = amount - floor(amount)
| eval frac_type = typeof(fraction)
| eval compare = if(fraction = 0.6, "T", "F")

Can you help me?

Thank you in advance,
Tommaso
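For what it's worth, a minimal sketch of a tolerance-based comparison: 10.6 and the result of 10.6 - floor(10.6) are stored as binary floating-point values, so the fraction is not exactly equal to 0.6. The 0.0001 tolerance below is an arbitrary choice; rounding the fraction before comparing would work as well.

| makeresults
| eval amount = 10.6
| eval fraction = amount - floor(amount)
| eval compare = if(abs(fraction - 0.6) < 0.0001, "T", "F")  ``` compare within a small tolerance instead of exact equality ```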
Hi,

Based on a multiselect input reading from

index="pm-azlm_internal_prod_events" sourcetype="azlm"

I define a token with the name opc_t.

This token can be used without any problems to filter data read from the same index further down in the dashboard (top 3 lines in the code below):

<query>index="pm-azlm_internal_prod_events" sourcetype="azlm" $opc_t$ $framenum$
| strcat opc "_" frame_num UNIQUE_ID
| dedup _time UNIQUE_ID
| append [ search index="pm-azlm_internal_dev_events" sourcetype="azlm-dev" ocp=$opc_t|s$
    | strcat ocp "-j_" fr as UNIQUE_ID
    | dedup UNIQUE_ID]
| timechart span=12h aligntime=@d limit=0 count by UNIQUE_ID
| sort by _time DESC
</query>

BUT, and here's my problem: using the same token on a different index (used in the append above) returns no results at all.

One (nasty) detail: the field names in the two indexes are slightly different. In index="pm-azlm_internal_prod_events" the field I need to filter on is called opc. In the second index, pm-azlm_internal_dev_events, the field name is ocp.

Dear experts: what do I need to change in the second query to be able to use the same token for filtering?
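A sketch of one possible workaround, assuming the multiselect token expands to a filter on the field name opc (for example opc="A" OR opc="B"): rename ocp to opc inside the appended search before applying the token, so the same token text matches in both branches. Whether this works as-is depends on how the token's prefix and value prefix are configured.

| append [ search index="pm-azlm_internal_dev_events" sourcetype="azlm-dev"
    | rename ocp AS opc  ``` align the dev field name with the prod one ```
    | search $opc_t$
    | strcat opc "-j_" fr UNIQUE_ID
    | dedup UNIQUE_ID ]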
I nabbed some searches from our license server/monitoring console and placed them on the search head cluster so that they would be available to some users who should not have access to the monitoring console. The resulting dashboard overview would benefit (heavily) from a base search to feed the different panels. However, some of the searches use subsearches and I cannot figure out if, and how, I can combine the two.

There are a couple of these searches where you pull license usage data and the available license for different pools, or the total license available (hence using stacksz when checking "all" pools ("*")):

index=_internal source=*license_usage.log* type="RolloverSummary" pool=$pool$
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" by pool fixedrange=false
| join type=outer _time
    [ search index=_internal source=*license_usage.log* type="RolloverSummary" pool=$pool$
    | bin _time span=1d
    | dedup _time stack
    | eval licenzz=if("$pool$"=="*", stacksz, poolsz)
    | stats latest(licenzz) AS "Available license" by _time ]
| fields - Temp
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3) ]

Different panels use different stats and evals, different AS naming, and more. There is however one consistent part, the initial search:

index=_internal source=*license_usage.log* type="RolloverSummary" pool=$pool$

I figured it would be a good idea to use a base search with this, though I cannot figure out how. Using a larger base search including the join and subsearch "sort of works", but getting all the different stats, evals, and AS clauses to produce the expected output is a nightmare. The initial, smaller base search above is the smallest common denominator, but then I can't figure out how to reference this base in the subsearch for the join?

All suggestions are welcome. All the best
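A hedged sketch of one direction this could go, assuming the goal is a single base search with no join at all: since both the volume and the available license come from the same RolloverSummary events, the pool size can be carried through one stats pass and each panel can post-process from there. The field names (b, slave, pool, poolsz, stacksz) are taken from the post; the exact aggregations each panel needs would still have to be adapted.

index=_internal source=*license_usage.log* type="RolloverSummary" pool=$pool$
| bin _time span=1d
| eval licenzz=if("$pool$"=="*", stacksz, poolsz)  ``` total stack vs. per-pool quota ```
| stats latest(b) AS b latest(licenzz) AS licenzz by slave, pool, _time

A panel post-process could then look something like:

| stats sum(b) AS volume max(licenzz) AS licenzz by _time, pool
| foreach volume licenzz [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3) ]
| rename licenzz AS "Available license"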
GWS is running for the whole company. Is it possible to ingest only a subset of users' logs into Splunk, using the add-on for GWS or some filter function somewhere? If I only want to monitor members of a specific department in my organization, how can I filter on GWS? I think the logs could be filtered by GCP while they are being sent to Splunk, but what about when using the GWS add-on directly? Maybe this question is more about the Google service... Anyone familiar with this, please kindly help. Thank you!
My data has a tables{}.values{} field containing a list of lists, and within each list there is data (sample below). When I try to export this search to CSV via the job ID, the tables{}.values{} data is not kept within a single cell; because it is comma-delimited, each value is treated as its own field. How can I keep all the data within a single field when exporting to CSV?

Sample data:

test@email.com The following was found, on website: google.com, by test user, with id:testuser, extracted from test.txt, on date testdate,another test field.
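A rough sketch of one workaround, joining the multivalue field into a single delimited string before the export so the commas inside it no longer split the CSV cell. The field name is taken from the post; the pipe delimiter and the new field name are arbitrary choices, and the export would then include combined_values instead of the original multivalue field.

| eval combined_values = mvjoin('tables{}.values{}', " | ")  ``` flatten the multivalue field into one string ```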
Currently, my sourcetype contains a mix of bank logs and card logs. I would like to route these into index=bank and index=card respectively. Currently, the search is done with index=main and all the data is displayed; with index=bank I want only bank-related logs to be returned. We set up the forwarder as follows and created bank, card, and error indexes on the servers that will receive the data. This is the configuration I have written so far... I need help.

splunk@heavy-forwarder:/opt/splunk/etc/apps/search/local:> cat inputs.conf
[monitor:///opt/splunk/var/log/splunk/test.log]
disabled = false
host = heavy-forwarder
sourcetype = test
crcSalt = <SOURCE>

splunk@heavy-forwarder:/opt/splunk/etc/system/local:> cat props.conf
[test]
TRANSFORM-routing=bankRouting,cardRouting,errorRouting

splunk@heavy-forwarder:/opt/splunk/etc/system/local:> cat transform.conf
[bankRouting]
REGEX=bank
DEST_KEY =_INDEX
FORMAT = bankGroup
[cardRouting]
REGEX=card
DEST_KEY =_INDEX
FORMAT = cardGroup
[errorGroup]
REGEX=error
DEST_KEY =_INDEX
FORMAT = errorGroup

splunk@heavy-forwarder:/opt/splunk/etc/system/local:> cat outputs.conf
[tcpout:bankGroup]
server = 192.168.111.153:9997
[tcpout:cardGroup]
server = 192.168.111.151:9997
[tcpout:errorGroup]
server = 192.168.111.152:9997
Hi Team,

Could you please help me with looping over inputs in Splunk SOAR? My requirement: I have an input like this, input=['a','b','c','d']. I need to run a query on each value from the input: first it must take the value 'a' and run the query, then from the run query result I need to take the sys id and pass it to create a ticket.

Note: we are using 6.1.1 (on-prem). Please help me with this.

Regards,
Harish
I have ServiceNames (A, B, C, D, E, F, G, H), but I want the results for (C, D, E, F, G, H) combined and renamed as "Other_Services".

My base search:

| rex "^[^=\n]*=(?P<ServiceName>[^,]+)"
| rex "TimeMS\s\=\s(?<Trans_Time>\d+)"

Required results:

ServiceName                          Trans_Time   Count
A                                    60           1111
B                                    40           1234
Other_Services (C, D, E, F, G, H)    25           1234567
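A hedged sketch of one way to do the grouping, reusing the rex extractions from the post; the avg() aggregation for Trans_Time is an assumption, since the post does not say how the times should be combined.

| rex "^[^=\n]*=(?P<ServiceName>[^,]+)"
| rex "TimeMS\s\=\s(?<Trans_Time>\d+)"
| eval ServiceName = if(ServiceName == "A" OR ServiceName == "B", ServiceName, "Other_Services")  ``` collapse everything except A and B ```
| stats avg(Trans_Time) AS Trans_Time count AS Count BY ServiceName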
Hello,

I'm new to the AppDynamics world. I tried to create a platform after the installation (see the messages attached below) with the following command and got the error message shown next. Can anyone advise me how to resolve this issue? Thanks. -- Jonathan Wang, 2024/07/30

Command:

[root@appd-server platform-admin]# bin/platform-admin.sh create-platform --name myappd --installation-dir /usr/local/appdynamics/platform2/

IOException while parsing API response: Failed to connect to appd-server/fe80:0:0:0:be24:11ff:fed4:bf11%2:9191

Installation step and associated log below. I finished the AppDynamics installation with the following command (on Rocky Linux 9.4):

./platform-setup-x64-linux-21.4.4.24619.sh

and got the following completion messages:

Installing Enterprise Console Database. Please wait as this may take a few minutes...
Installing Enterprise Console Database...
Installing Enterprise Console Application. Please wait...
Installing Enterprise Console Application...
Creating Enterprise Console Application login...
Copying timezone scripts to mysql archives...
Creating Enterprise Console Application login...
Setup has finished installing AppDynamics Enterprise Console on your computer.
To install and manage your AppDynamics Platform, use the Enterprise Console CLI from /usr/local/appdynamics/platform2/platform-admin/bin directory.
Finishing installation ...
If I run the code below, I get events in the output JSON file. If I want to get statistics instead, is there an API available for that? If I want to get the error count and stdev in the JSON file, how can I use the Python code to get these values?

payload = f'search index="prod_k8s_onprem_vvvb_nnnn" "k8s.namespace.name"="apl-siii-iiiii" "k8s.container.name"="uuuu-dss-prog" NOT k8s.container.name=istio-proxy NOT log.level IN(DEBUG,INFO) (error OR exception)(earliest="07/25/2024:11:30:00" latest="07/25/2024:12:30:00")\n' \
    '| addinfo\n' \
    '| bin _time span=5m@m\n' \
    '| stats count(eval(log.level="ERROR")) as error_count by _time\n' \
    '| eventstats stdev(error_count)'
print(payload)

payload_escaped = f'search={urllib.parse.quote(payload)}'
headers = {
    'Authorization': f'Bearer {splunk_token}',
    'Content-Type': 'application/x-www-form-urlencoded'
}
url = f'https://{splunk_host}:{splunk_port}/services/search/jobs/export?output_mode=json'
response = requests.request("POST", url, headers=headers, data=payload_escaped, verify=False)
print(f'{response.status_code=}')

txt = response.text
if response.status_code == 200:
    json_txt = f'[\n{txt}]'
    os.makedirs('data', exist_ok=True)
    with open("data/output_deploy.json", "w") as f:
        f.write(json_txt)
        f.close()
else:
    print(txt)
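A small hedged note on the SPL portion: because the search already ends in transforming commands (stats and eventstats), the export endpoint returns those result rows rather than raw events, so the error count and stdev can be read straight from the JSON. Giving the eventstats output an explicit name makes the field easier to pick out; error_stdev is an arbitrary name, and the single quotes around log.level are needed inside eval because the field name contains a dot.

| stats count(eval('log.level'="ERROR")) AS error_count BY _time
| eventstats stdev(error_count) AS error_stdev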
After upgrading my deployment server to Enterprise 9.2.2, the clients are no longer connecting to the deployment server. When I launch the DS UI and check for connecting clients, it says 0. Has anyone had this issue?