All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, I have a question: every month the EUM licenses refresh, and I can see the valid-from and valid-to dates changing accordingly. Will the utilized data carry over in the same way? For example: in Jan 2022 the utilized pageview count is 400, in Feb 2022 it is 700, and in Mar 2022 it is 1100. In this scenario the total utilized value is 1100 pageviews, so each month the already-utilized value remains and new views are added on top. As per my knowledge and understanding, that means Jan = 400 views, Feb = 300 views, and Mar = 400 views; the utilized count does not reset and start from scratch. Please correct me if I am wrong. Thanks, Jaganathan
Hi Team, I have installed the trial version of Splunk Enterprise. It worked fine for two days, but after that I am no longer able to access the Splunk URL. It gives the error below. Please help with the same. This site can't be reached: 127.0.0.1 refused to connect.
I want to group streamstats results by either one or two fields. Grouping by sourcetype would be sufficient; grouping by index and sourcetype would be ideal. This query works fine for a single sourcetype, but does not work for multiple sourcetypes. The desired outcome is one record per unique sourcetype and/or index. Example query:
| tstats count as event_count where index="aws_p" sourcetype="aws:cloudwatch:guardduty" by _time span=1m index sourcetype
| sort _time
| streamstats window=1 current=false sum(event_count) as event_count values(_time) as prev_time by index sourcetype
| eval duration=_time-prev_time
| eval minutes_between_events=duration/60
| stats min(minutes_between_events) as min_minutes_between_events avg(minutes_between_events) as avg_minutes_between_events max(minutes_between_events) as max_minutes_between_events by index sourcetype
| eval avg_minutes_between_events=round(avg_minutes_between_events,0)
| eval max_hours_between_events=round(max_minutes_between_events/60,2)
[screenshot: results for multiple sourcetypes]
[screenshot: results for a single sourcetype]
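As a minimal sketch of one variant (dropping the single-sourcetype filter and switching to last(_time) are assumptions, not a confirmed fix), keeping one running gap calculation per index/sourcetype pair could look like:
| tstats count as event_count where index="aws_p" by _time span=1m index sourcetype
| sort 0 index sourcetype _time
| streamstats window=1 current=false last(_time) as prev_time by index sourcetype
| eval minutes_between_events=(_time-prev_time)/60
| stats min(minutes_between_events) as min_minutes_between_events avg(minutes_between_events) as avg_minutes_between_events max(minutes_between_events) as max_minutes_between_events by index sourcetype
This still ends with one row per index/sourcetype combination, which matches the desired outcome described above.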
Hi, I have a distributed on-prem Splunk Enterprise deployment at 8.1.x, running under systemd. I recently noticed that the previous admin did not tune the ulimits. I was wondering if anyone knows how to tune these settings based on an individual host's role/hardware. For instance, if I override with the "systemctl edit Splunkd.service" command:
[Service]
LimitNOFILE=65535 <- tech support suggestion
LimitNPROC=20480 <- tech support suggestion
LimitDATA=(80% of total RAM?) <- tech support suggestion
LimitFSIZE=infinity <- tech support suggestion
TasksMax=20480 <- mirrored LimitNPROC per tech support
I have read the docs and seen the default suggestions for NOFILE and NPROC, but how do you determine the other limits, specifically NPROC and DATA?
ref: https://docs.splunk.com/Documentation/Splunk/9.0.2/Installation/Systemrequirements#Considerations_regarding_system-wide_resource_limits_on_.2Anix_systems
Thank you!
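As a worked example of the 80%-of-RAM suggestion (the 64 GiB host size is only an assumption for illustration): 0.8 x 68,719,476,736 bytes (64 GiB) is roughly 54,975,581,389 bytes, so on such a host the override would read approximately LimitDATA=54975581388. Whether 80% is the right ratio for a given host role is exactly the open question here.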
Hello, I have the following tabular-formatted data: [table screenshot] How can I achieve the following: [desired output screenshot] Thanks in advance for your help. @ITWhisperer
Hello everyone! I have a basic search:
index=main | stats list(src.port), list(dst.port), count(src.ip) as COUNT by id
How can I apply mvdedup (or stats dc, whichever fits better) to the dst.port field only if the number of dst ports equals the COUNT field?
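A minimal sketch of one way to do it (the renamed multivalue fields below are illustrative, not from the original search): build the multivalue field with stats, then dedup it conditionally with eval:
index=main
| stats list(src.port) as src_ports, list(dst.port) as dst_ports, count(src.ip) as COUNT by id
| eval dst_ports=if(mvcount(dst_ports)=COUNT, mvdedup(dst_ports), dst_ports)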
In the automated e-mails sent for Splunk OnDemand entitlements, the link hyperlinked as "support portal" takes you to an outdated site: https://legacylogin.splunk.com/
Hello, how are you? Can you help me? I'm trying to configure the deployer to push the apps to the search heads, but I'm getting an error when executing the command.
Command: ./splunk apply shcluster-bundle -action stage --answer-yes
Error: Error in pre-deploy check, uri=?/services/shcluster/captain/kvstore-upgrade/status, status=502, error=Cannot resolve hostname
Thanks!
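One check that may help narrow this down (an assumption, not a confirmed diagnosis): run ./splunk show shcluster-status on one of the search head cluster members and confirm that the captain's mgmt_uri shown there resolves from the deployer, since "Cannot resolve hostname" in the pre-deploy check suggests a DNS lookup of that URI is failing.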
Hello, I'm getting this error while trying to register a Linux server client:
ssl3_read_bytes:tlsv1 alert unknown ca - please check the output of the openssl verify command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to true), the CA certificate and the server certificate should not have the same Common Name.
Does anyone know how to solve it?
Hi, I need help creating a daily CSV export to a file from a dataset covering 24 hours. I have a dataset under Search & Reporting >> Datasets >> my dump report. When I click on "my dump report" it gives me the report for whatever condition/time period I like, and I am then able to download it as a CSV file. I need help creating a daily automatic job so that every 24 hours (each day from 00:00:00 to 23:59:00) the report is created and saved/exported to a specific folder/disk, with a date_Month_year-wise folder/file name. Kindly guide me on the same. I am trying this on Splunk installed on Windows.
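A minimal sketch of the scheduled-export part, assuming the dataset can be expressed as a saved search (the search body and file name are placeholders): schedule a report with a daily cron (for example 0 0 * * *) over the previous day and end it with outputcsv, which writes the file under $SPLUNK_HOME/var/run/splunk/csv on the Splunk server:
<your dataset search here> earliest=-1d@d latest=@d
| outputcsv my_dump_report_daily
Moving or renaming that file into a date_Month_year folder on another disk would still need an OS-level job (for example Windows Task Scheduler) that runs after the report completes.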
Hello. I'm trying to identify a pool of Windows hosts by adding an additional field to the events they forward. I can do this by adding an inputs.conf in /Splunk_home/etc/system/local, and this works. My metadata field is called uf_deployment::remote_laptop (see below).
[monitor://C:\Windows\System32\winevt\Logs\Application.evtx]
index = my_index
disabled = 0
sourcetype = XmlWinEventLog
_meta = uf_deployment::remote_laptop
However, the only way I can see to do this is by monitoring a log file and using the [monitor] stanza. This presents a problem: I don't want to forward events from a log that I'm not interested in, nor do I want to duplicate events. I'm looking for a solution that will allow me to send the _meta = uf_deployment::remote_laptop field without having to "monitor" a log file. So far I have tried [default] with no success (see below). Any help is appreciated. Thank you.
[default]
index = my_index
disabled = 0
sourcetype = XmlWinEventLog
_meta = uf_deployment::remote_laptop
All: Recently we had an issue with our Oracle environment (11.2.0.4 / Oracle Cloud Infrastructure) where the following RMAN query was running and consuming the highest CPU time due to blocking/deadlock issues.
select status,
       to_char(cast(start_time as timestamp with time zone),'yyyy-MM-dd"T"HH24:mi:ss.FF3TZH:TZM'),
       to_char(cast(end_time as timestamp with time zone),'yyyy-MM-dd"T"HH24:mi:ss.FF3TZH:TZM'),
       object_type,
       operation
from v$rman_status
where recid = (select max(recid) from v$rman_status where operation = 'BACKUP' and object_type like 'DB%')
   or recid = (select max(recid) from v$rman_status where operation = 'BACKUP' and object_type = 'ARCHIVELOG')
As part of the investigation, we found that this query is initiated by the Splunk module via sqlplus and runs every minute. To keep it running successfully, we have currently applied a SQL profile with a better execution plan so it does not become a runaway query. I would like to recommend rewriting the query as follows in the source Splunk module to avoid it becoming a runaway query:
select status,
       to_char(cast(start_time as timestamp with time zone),'yyyy-MM-dd"T"HH24:mi:ss.FF3TZH:TZM'),
       to_char(cast(end_time as timestamp with time zone),'yyyy-MM-dd"T"HH24:mi:ss.FF3TZH:TZM'),
       object_type,
       operation
from v$rman_status
where recid = (select max(recid) from v$rman_status where operation = 'BACKUP' and object_type like 'DB%')
union
select status,
       to_char(cast(start_time as timestamp with time zone),'yyyy-MM-dd"T"HH24:mi:ss.FF3TZH:TZM'),
       to_char(cast(end_time as timestamp with time zone),'yyyy-MM-dd"T"HH24:mi:ss.FF3TZH:TZM'),
       object_type,
       operation
from v$rman_status
where recid = (select max(recid) from v$rman_status where operation = 'BACKUP' and object_type = 'ARCHIVELOG')
If you have any questions/concerns, please let me know. Thank you, Ramesh Vasudevan
We upgraded to Splunk Enterprise v9.0.2 yesterday and have subsequently hit an issue with an integration with our ServiceNow platform for raising Cases and Incidents. We have a custom "Alert Action" which uses the Python "requests" module. (I'm wondering if the removal of Python 2 has had an impact on our previously working script and its "import requests".) In any case, we seem to be getting SSL: UNKNOWN_PROTOCOL. Just asking if anyone has had something similar happen; I'm pulling my hair out trying to make sense of the SSL configuration settings. We issue a log statement before and after the requests call, and it never reaches the "after", like a silent error. Any pointers would be appreciated.
I'm trying to use the streamstats command with time_window to track when certain user actions happen more than twice in the span of an hour. My search is like this ("dedup _time" because we get duplicate rows):
<search>
| sort _time
| fields _time user
| dedup _time
| streamstats time_window=60min count as amount by user
| where amount > 2
| table _time user amount
This search works correctly otherwise, but the problem is that it triggers multiple times when the user action happens more than twice in an hour, so I get results like:
2022-01-12 16:04:56.482 username1 3
2022-01-12 16:07:58.525 username1 4
2022-01-12 16:13:16.137 username1 5
2022-01-12 16:14:30.255 username1 6
How can I get only the largest result (in this case 6) for a sequence like this? I can't use "dedup user" because the alert may trigger for the same user at some other time as well, and that should be reported as its own case. Any help is greatly appreciated.
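One possible sketch (the reverse-scan trick below is an assumption about what counts as the end of a burst, not a definitive answer): after the streamstats, compare each row with the next newer count for the same user and keep only rows that are not followed by a higher count:
<search>
| sort 0 _time
| fields _time user
| dedup _time
| streamstats time_window=60min count as amount by user
| where amount > 2
| sort 0 - _time
| streamstats window=1 current=false last(amount) as newer_amount by user
| where isnull(newer_amount) OR newer_amount < amount
| sort 0 _time
| table _time user amount
For the example results above, this keeps only the row with amount=6, while a separate burst for the same user at another time would still produce its own row.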
Hi Team, we have Splunk Cloud Victoria with two search heads (a Core SH and an ES SH). We installed the MS Cloud Services add-on on the Core SH and it automatically appears on the ES SH, but the input we configured in the add-on on the Core SH is not reflected on the ES SH. 1. Does the input (MSCS add-on) configuration also replicate to both SHs if we configure it on only one SH? 2. If not, and we configure the input on both SHs, is it possible to get duplicate data?
Hello everyone, I am using the DSDL app: https://splunkbase.splunk.com/app/4607 The model I use is sklearn's KMeans: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html My goal is to cluster a dataset with k-means and then assign new observations to the center points of my k-means model. I built the model and everything compiled just fine. I then wanted to check my model for possible logical errors. To this end, I use the same dataset in the | fit and | apply commands:
| inputlookup wineventlog.csv | fit MLTKContainer algo=Pipeline_V3 k=5 fe_* into app:pipeline_v3
| inputlookup wineventlog.csv | apply pipeline_v3
To my understanding, this should yield exactly the same results. However, it does not: | fit creates 5 clusters and assigns observations to each cluster, while | apply only assigns observations to 3 of the 5 clusters. Does anyone have a precise idea of what goes on behind the scenes with | fit and | apply? I already checked the documentation that outlines what they both do: https://docs.splunk.com/Documentation/MLApp/5.3.3/User/Understandfitandapply I thoroughly checked whether I have null values, non-numeric fields that might get converted, etc., but I could not figure out why fit and apply wouldn't yield the same result. Below is the code I use for each respective command.
from sklearn.cluster import KMeans  # import used by both snippets below

def fit(model, df, param):
    # Number of clusters
    k = int(param["options"]["params"]["k"])
    # Fit kmeans model
    kmeans = KMeans(n_clusters=k, random_state=0).fit(df)
    model["kmeans"] = kmeans
    return model

def apply(model, df, param):
    # Assign new observations to kmeans centers
    predictions = model["kmeans"].predict(df)
    return predictions
Here's the weirdest error I've ever seen. When I run the following code snippet I get a syntax error: line 1, column 0. But when I change the port to anything else, the code runs. Any help?
import os
import splunklib.client as client
from dotenv import load_dotenv

def connect_to_splunk(username, password, host='splunk.my_company.biz', scheme='https'):
    try:
        service = client.connect(username=username, password=password, host=host, scheme=scheme, port='443')
        if service:
            print("Splunk service created successfully")
    except Exception as e:
        print(e)
index=XX sourcetype=YY source=*/log/abc.log
| dedup _time, bppm_message, bppm_nodename sortby -_indextime
| rex field=bppm_operations_annotations "0x[a-z0-9]{8},(?<onepass>\w+),,(OPERATOR|OVERRIDE)_CLOSED"
| rex field=bppm_operations_annotations "(OPERATOR_CLOSED:|OVERRIDE_CLOSED:|OWNERSHIP_TAKEN:)\s+(?<closed_with>(\w|\s|\-|\/|\.|\[|\])+),"
| rex field=bppm_annotations "0x[a-z0-9]{8},[a-zA-Z0-9]+,(?<INC_CREATED_TSIM>.*)"
| rex field=bppm_annotations "[0-9]{3}[A-Z0-9]{5},[a-zA-Z0-9]+,(?<MUTIPLE_CLOSURE_ANNOTATION>.*)"
| rex field=bppm_annotations "0x[a-z0-9]{8},CME-remedy.mrl:execute AddIncidentToNotes,Incident (?<INC_CREATED_TSIM_2>(\w|\s|\-|\/|\.|\[|\])+) created by"
| eval bppm_nar_close_multiple_events=if(NOT match(bppm_operations_annotations,"OVERRIDE_CLOSED") AND (NOT match(bppm_operations_annotations,"OPERATOR_CLOSED")),"yes", "no")
| eval closed_with = if(isnull(closed_with) OR closed_with="Null", INC_CREATED_TSIM, closed_with)
| eval closed_with = if(isnull(closed_with) OR closed_with="Null", INC_CREATED_TSIM_2, closed_with)
| eval closed_with = if(isnull(closed_with) OR closed_with="Null", MUTIPLE_CLOSURE_ANNOTATION, closed_with)
| fillnull value="Null" closed_with
| eval time=strftime(_time,"%x-%H:%M:%S")
| lookup onepasslk onepass
| search bppm_ecdb_env="***"
| fillnull value="Null"
| stats count(bppm_message) as Total_count, count(eval(like(closed_with, "%INC%"))) as Closed-With-INC, count(eval(like(closed_with, "%CRQ%"))) as Closed-With-CRQ, count(eval(like(closed_with, "%PKE%"))) as Closed-With-PKE, count(eval(like(closed_with, "%WO%"))) as Closed-With-WO by username
| sort -Total_count
With the above query I am able to pull data as shown in the attached image, but I want to add 3 more columns (Unique INC Count, Unique CRQ Count, Unique WO Count) showing distinct counts. Please help on how to achieve this.
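A minimal sketch of how the three extra columns could be added inside the existing stats (assuming closed_with is the field carrying the INC/CRQ/WO references, as in the search above): use dc() with an eval that nulls out non-matching values:
... | stats count(bppm_message) as Total_count, dc(eval(if(like(closed_with, "%INC%"), closed_with, null()))) as Unique_INC_Count, dc(eval(if(like(closed_with, "%CRQ%"), closed_with, null()))) as Unique_CRQ_Count, dc(eval(if(like(closed_with, "%WO%"), closed_with, null()))) as Unique_WO_Count by username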
I want to list all the KV store collections through SPL, something like:
| rest /servicesNS/-/- .......
I'm unable to figure out the right rest endpoint and the correct fields. Would appreciate any guidance.
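A minimal sketch (field names may vary slightly by version): KV store collections are exposed under storage/collections/config, so something like
| rest /servicesNS/-/-/storage/collections/config
| rename "eai:acl.app" as app, title as collection
| table app collection
should list each collection together with the app it belongs to.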
Hello everyone, I'm new to Splunk. I'm trying to set up and integrate GitHub Cloud logs into Splunk Cloud. I have managed to get the Splunk GitHub Add-on added to the Splunk IDM. Currently I'm waiting for a GitHub account to be created; once I have the account, I will proceed to set up the inputs. Has anyone done this before? Thanks & Regards, Murali