

All Posts

Awesome, thanks for the help, much appreciated. This worked for me. Thanks again, Tom  
Hi, I tried to build a Splunk environment with 1 SH and an indexer cluster with 2 peers + a manager node. When I go to Monitoring Console -> Settings -> General Setup, it shows me only my SH and peers, without the manager node. But when I go to Distributed environment, I can see my indexer manager configured. Did I do something wrong, or should it not be displayed in the General Setup menu?
index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=* | eval status=if(status=502,200,status) | chart count by status
I am trying to import into the Observability platform, but I am failing to follow your documentation. This page, https://docs.splunk.com/observability/en/admin/authentication/authentication-tokens/org-tokens.html#admin-org-tokens, says Settings - Access Tokens exists, but it doesn't. (My home page is https://prd-p-a9b9x.splunkcloud.com/en-US/manager/splunk_app_for_splunk_o11y_cloud/authentication/users). Settings - Tokens exists, but it doesn't create tokens with scopes. I don't know if that's a documentation error or an application error. I then tried running the code at https://docs.splunk.com/observability/en/gdi/other-ingestion-methods/rest-APIs-for-datapoints.html#start-sending-data-using-the-api, which says I need a realm, and that a realm can be found at "your profile page in the user interface". But it's not in User Settings and it's not in Settings - User Interface. Your documentation doesn't seem to match your application. Am I on the wrong page, or are your docs years out of date? Please help.
Yes, that's exactly what that is for. Still, consider what @gcusello already said - multiplying indexes is not always a good practice. There are different mechanisms for data "separation" depending on your use case. Unless you need:
- different access permissions
- different retention periods
or you have significantly different data characteristics (cardinality, volume and "sparsity"), you should leave the data in the same index and limit your searches by adding conditions.
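For example (a minimal sketch, with hypothetical index, sourcetype and field names), instead of a dedicated index per application, keep the data together and filter:

index=shared_logs sourcetype=app_logs app="billing"
| stats count by status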
What I meant by "dynamic" is that the value for the index should be whatever the regex finds, used in FORMAT. I know I can use a static value, but wanted to confirm whether it is possible to use a regex to dynamically pick the correct index, which is part of the Source. Example sources: phone-1234, tablet-23456, pc-45623, pc-79954

[new_index]
SOURCE_KEY = MetaData:Source
REGEX = (\w+)\-\d+
# FORMAT needs to be phone, tablet, pc etc. - don't want to make it static
FORMAT = $1
DEST_KEY = _MetaData:Index
WRITE_META = true
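For completeness, a minimal sketch of the props.conf side that would wire this transform in (the sourcetype name here is an assumption); note the target indexes (phone, tablet, pc, ...) must already exist on the indexers:

# props.conf - sourcetype name is hypothetical
[device_status]
TRANSFORMS-set_index = new_index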
Hi KendallW,

This is the search:

index=_internal (host=`sim_indexer_url` OR host=`sim_si_url`) sourcetype=splunkd group=per_Index_thruput series!=_*
| timechart minspan=30s per_second(kb) as kb by series

Then I selected 30 days on the time picker and selected the visualization. I have attached another screenshot. I hope it helps.
I managed to solve it by looking at the Splunk docs and noticing I was using the wrong flags:

# configuring Splunk
msiexec.exe /i "C:\Installs\SplunkInstallation\splunkforwarder-9.2.0.1-d8ae995bf219-x64-release.msi" SPLUNKUSERNAME=admin SPLUNKPASSWORD=**** DEPLOYMENT_SERVER="********:8089" AGREETOLICENSE=Yes /quiet
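To verify the deployment server setting afterwards, one option (assuming the default install path) is:

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" show deploy-poll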
index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=* | chart count by status | eventstats sum(count) as total | eval percent=100*count/total | eval percent=ro... See more...
index="intau" host="server1" sourcetype="services_status.out.log" service="HTTP/1.1" status=* | chart count by status | eventstats sum(count) as total | eval percent=100*count/total | eval percent=round(percent,2) | eval SLO =if( status="200","99,9%","0,1%") | where NOT (date_wday=="saturday" AND date_hour >= 8 AND date_hour < 11) | fields - total count   I have the above Query and the above result , how can i combine 502 and 200 results to show our availability excluding maintenance time of 8pm to 10pm every Saturday, how can i make it look like the drawing I produced there
Thanks, I think https://docs.splunk.com/Documentation/Splunk/9.2.1/InheritedDeployment/Ports is the one included?
Hi @irisk,
did you try to use INDEXED_EXTRACTIONS = json in your sourcetype? Otherwise, have you already tried the spath command (https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Spath)? Ciao. Giuseppe
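For example, a minimal runnable sketch against a sample event like the one in the question (field names assumed from that sample):

| makeresults
| eval _raw="{\"log\": {\"trace_id\": \"abc\", \"log_type\": \"DEBUG\", \"message\": \"hello\"}}"
| spath input=_raw
| rename log.* as *
| table trace_id log_type message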
Hello, I receive an event of the following format:

{ log: { 'trace_id': 'abc', 'request_time': '2024-06-04 10:49:56.470140', 'log_type': 'DEBUG', 'message': 'hello'} }

Is it possible to extract the inner JSON from all the events I receive? Each key in the inner JSON should become a column value, but the me...
After a few tries, I changed the test case and saw the problem was that I was asking Splunk to save an event "in the future", and apparently that's not possible.
SHC - Search Head Cluster. You can use openssl to check the validity of a certificate. There are lots of examples on the net of how to do this.
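For example (the file path, host and port are placeholders):

# validity dates of a local certificate file
openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -dates

# certificate presented by a live SHC member
echo | openssl s_client -connect sh1.example.com:8089 2>/dev/null | openssl x509 -noout -dates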
Whatever you have mentioned is correct; we are facing this issue only for one log, for the others Show Source loads fine. After truncating, I am still getting: Failed to find target event in final sorted event list. Cannot properly prune results
Unfortunately, as you're introducing an additional external component (a Cribl worker), it's hard to say what happens where. It's probably not that Cribl merges anything; more likely it doesn't split the events properly, since the UF sends data in chunks, not single events. So the one at fault here is most probably the Cribl worker. BTW, why don't you just send UF->Indexer (or UF->HF->Indexer)?
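If the intermediary has to stay, a common mitigation (a sketch, with a hypothetical sourcetype name) is enabling event breaking on the UF in props.conf, so it forwards complete events rather than raw chunks:

# props.conf on the UF - sourcetype name is hypothetical
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)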
Can anybody help, please?
I have a small query that splits events depending on a multivalue field, and each date in the multivalue field needs to become the _time of its corresponding "collected" row.

index=test source=test
| eval fooDates=coalesce(fooDates, foo2), fooTrip=mvsort(mvdedup(split(fooDates, ", "))), fooCount=mvcount(fooTrip), fooValue=fooValue/fooCount
| mvexpand fooTrip
| fields - _raw
| eval _time=strptime(fooTrip, "%F")
| table _time VARIOUS FIELDS
| collect index=test source="fooTest" addtime=true

The output table view is exactly what I'm expecting, but when I search for these fields in the new source, they have today's time (or, with addtime=false, the earliest time from the time picker). Also, using testmode=true, I still see the results as they are supposed to be. What's wrong? Thanks
Thank you for your answer deepakc, but that is not correct. I do not want a simple KPI dashboard. Each detailed (sub) dashboard has custom queries which I don't want to run automatically twice, once on the detailed board and once on the summary board. Maybe a simple example makes my question clearer:

App1-Dashboard:
- 10 different custom queries which will show 10 different traffic-light style indications
App2-Dashboard:
- 50 different custom queries which will show 50 different traffic-light style indications
App3-Dashboard:
- 15 different custom queries which will show 15 different traffic-light style indications

The logs are not simply evaluated based on log level, but rather on specific string combinations. Instead of looking at each of my three dashboards one by one, I would like to have a "Summary Dashboard" which only includes three traffic lights, one for each app mentioned above. If e.g. App2-Dashboard has one of its 50 traffic-light warnings, I would like the App2 traffic light in my "Summary Dashboard" to indicate yellow or red, to make sure I'm aware of any problem in App2. I do not want to have all custom queries run in the Summary Dashboard and again on each app dashboard.
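One pattern that might fit (a sketch, with hypothetical names): schedule each app's checks as saved searches, then let the summary dashboard reuse the cached results via loadjob instead of re-running them:

| loadjob savedsearch="admin:app2:App2_TrafficLight_Status"
| stats max(severity) as worst_severity

Here App2_TrafficLight_Status and severity are assumptions; loadjob reads the last scheduled run's results, so each query only runs on its own schedule.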