All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I'm collecting all other logs (i.e. wineventlogs, splunkd logs), the inputs.conf is accurate, and the splunk user has full access to the file. What are some non-Splunk reasons that would prevent a file from being monitored?
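If it helps to rule Splunk itself in or out, the forwarder usually logs why it skipped a file, and `splunk list inputstatus` on the forwarder gives a per-file view. A minimal diagnostic sketch, assuming you replace the placeholder path with the real one:

index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "C:\path\to\your\file"
| table _time host component log_level _raw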
If I had to write a document for myself on basic learning of Splunk: to create a dashboard, I can either use inputs like index, source, and sourcetype fields, or I can provide a data set. Is that right? Can I write it like this, or am I wrong with these side headings?
Understanding of input data: explore different methods of data input into Splunk, such as ingesting data from files, network ports, or APIs.
Understanding of data domains: discover how to efficiently structure your data in Splunk using data models to drive analysis.
If I have the following table using columns DATE and USAGE, is there a way to create a 3rd column to display the difference in the USAGE column from Day X to the previous day?

DATE     USAGE
Feb-28   17.68 gb
Feb-29   18.53 gb
Mar-01   19.66 gb
Mar-02   21.09 gb
Mar-03   22.04 gb
Mar-04   23.21 gb
Mar-05   24.20 gb
Mar-06   24.94 gb
Mar-07   25.64 gb
Mar-08   26.80 gb
Mar-09   27.79 gb
Mar-10   29.07 gb
Mar-11   30.09 gb
Mar-12   31.01 gb
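A sketch of one way to add that third column, assuming the rows are already in date order and USAGE is a string like "17.68 gb" (the field names simply mirror the table above):

... your existing search producing DATE and USAGE ...
| eval usage_gb = tonumber(replace(USAGE, " gb", ""))
| delta usage_gb as daily_change_gb
| table DATE USAGE daily_change_gb

delta subtracts the previous row's value from the current row's, so the first row's daily_change_gb will be empty.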
I'm running stats to find out which events I want to delete. Basically I'm finding the minimum "change_set" a particular "source" has. Now, I want to delete the events from these sources with the least "change_set".

Note: the events also have a lot of attributes apart from change_set and source.

index=test | stats min(change_set) by source

(Now delete the events which have that source and change_set.)

How can I write the delete operation with this query, in the most optimal way? @gcusello @ITWhisperer @scelikok @PickleRick

Thanks
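A minimal sketch of one way to do this, assuming the per-source minimum change_set values can be fed back as a subsearch filter (delete also requires the can_delete role, and it only hides events from search rather than freeing disk space):

index=test
    [ search index=test
      | stats min(change_set) as change_set by source
      | fields source change_set ]
| delete

The subsearch turns each row into a (source=... AND change_set=...) condition, so only events matching a source together with its minimum change_set are marked for deletion.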
Good afternoon everyone, I need your help with this. I have a stats sum with the wildcard *:

| appendpipe [ stats sum(*) as * by Number | eval UserName="Total By Number: " ]

and I need to format the sum(*) as * values with commas. How can I do that? Thank you
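One possible approach is a foreach over the wildcard fields after the appendpipe; a sketch assuming every field you want formatted is numeric (non-numeric fields are left untouched by the isnum guard):

| foreach * [ eval <<FIELD>> = if(isnum('<<FIELD>>'), tostring('<<FIELD>>', "commas"), '<<FIELD>>') ]

Note this turns the numbers into strings, so apply it after any further calculations; if Number itself is numeric it will be reformatted too unless you exclude it from the foreach.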
We are using the Android agent for AppDynamics and R8 to obfuscate the code. The corresponding mapping file was uploaded; AppDynamics recognizes it and "deobfuscates" the stacktrace, but in the end both versions are identical and include obfuscated method names. This has happened with multiple app releases and crashes. Locally I am perfectly able to retrace the stacktrace provided by AppDynamics with the uploaded mapping file. Does someone have an idea what the reason for this may be?
AppDynamics Gradle Plugin version: 23.6.0
Android Gradle Plugin version: 8.1.4
index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message = "*Notification(REQUESTED)*")
| stats latest(_time) as start_time by cwmessage.transId
| join cwmessage.transId
    [ search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message = "*Notification(COMPLETED)*")
      | stats latest(_time) as cdx_time by cwmessage.transId ]
    [ search index="abc" aws_appcode="123" logGroup="watch" region="us-east-1" (cwmessage.message = "*Notification(UPDATeD)*")
      | stats latest(_time) as upd_time by cwmessage.transId ]
| join cwmessage.transId
| eval cdx=cdx_time-start_time, upd=upd_time-cdx_time
| table cwmessage.transId, cdx, upd

In the above query I am running the same index search multiple times. I want to use it as a base search and call it from all of the nested searches in the dashboard. Please help me. Thanks
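One option is to avoid the joins entirely so the dashboard only needs a single pass over the index; a sketch, assuming the three message patterns shown above are the only ones involved:

index="abc" aws_appcode="123" logGroup="watch" region="us-east-1"
    ("*Notification(REQUESTED)*" OR "*Notification(COMPLETED)*" OR "*Notification(UPDATeD)*")
| stats latest(eval(if(like('cwmessage.message', "%Notification(REQUESTED)%"), _time, null()))) as start_time
        latest(eval(if(like('cwmessage.message', "%Notification(COMPLETED)%"), _time, null()))) as cdx_time
        latest(eval(if(like('cwmessage.message', "%Notification(UPDATeD)%"), _time, null()))) as upd_time
        by cwmessage.transId
| eval cdx=cdx_time-start_time, upd=upd_time-cdx_time
| table cwmessage.transId, cdx, upd

In Simple XML you could also place the shared portion in a <search id="base"> element and reference it from each panel with base="base", but collapsing the joins as above usually scales better.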
Hi Everyone, I am trying to replicate a log modification that was possible with fluentd when using splunk-connect-for-kubernetes.

splunk_kubernetes_logging:
  cleanAuthtoken:
    tag: 'tail.containers.**'
    type: 'record_modifier'
    body: |
      # replace key log
      <replace>
        key log
        expression /"traffic_http_auth".*?:.*?".+?"/
        # replace string
        replace "\"traffic_http_auth\": \"auth cleared\""
      </replace>

Now that support for the above chart has ended, we have switched to splunk-otel-collector. Along with this we also switched to logsengine: otel, and I am now having a hard time replicating this modification. Per the documentation I read, this should be done via processors (in the agent); please correct me if I am wrong here. I have tried two processors, but neither works. What am I missing here?

logsengine: otel
agent:
  enabled: true
  config:
    processors:
      attributes/log_body_regexp:
        actions:
          - key: traffic_http_auth
            action: update
            value: "obfuscated"
      transform:
        log_statements:
          - context: log
            statements:
              - set(traffic_http_auth, "REDACTED")

This is new to me; can anyone point me to where these log modifiers can be applied?
Thanks,
Ppal
When I do a stats count on my field it returns double the real number.

index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval Agrupamento=if(Agrupamento!="", Agrupamento, "AGRUPAMENTO_HOLDING/CE")
| eval Timestamp=strftime(_time, "%Y-%m-%d")
| stats count by Agrupamento, Timestamp
| sort -Timestamp

I already tried dedup, and when I count only by Timestamp it works fine.
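One thing worth checking (just a guess from the symptom) is whether Agrupamento is a multivalue field on some events, since a stats count by a multivalue field counts the event once per value. A quick diagnostic sketch:

index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval value_count = mvcount(Agrupamento)
| where value_count > 1
| table _time Agrupamento value_count

If rows come back, collapsing the values per event with eval Agrupamento=mvdedup(Agrupamento) (or picking a single value) before the stats may fix the doubling.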
Hey guys, has anyone ever come across this problem? I'm using Splunk Cloud. I'm trying to extract a new field using regex, but the data is under the source field.

| rex field=source "Snowflake\/(?<folder>[^\/]+)"

This is the regex I'm using; when I use it in the search it works perfectly. But the main goal is to save this extraction as a permanent field. I know that field extraction draws from "_raw" by default; is there an option to direct Splunk Cloud to pull from source and save it as a permanent field?
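A sketch of what a persistent search-time extraction against source could look like in props.conf, with your_sourcetype as a placeholder stanza name; the "in source" clause is what points the extraction at a field other than _raw:

[your_sourcetype]
EXTRACT-folder = Snowflake\/(?<folder>[^\/]+) in source

In Splunk Cloud this would typically be delivered via an app or created through the field extraction settings rather than by editing props.conf on disk.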
Hi, I would like to know how I can create a new filter by field, like "slack channel name" / phantom artifact id. How is it created? Thanks!
Hello, can anyone assist in determining why my Splunk instance ingests large amounts of data ONLY on the weekends? This appears to be across the board for all hosts as near as I can tell. I run this command:

index=_internal metrics kb series!=_* "group=per_host_thruput" earliest=-30d
| eval mb = kb / 1024
| timechart fixedrange=t span=1d sum(mb) by series

and it shows the daily ingest for numerous forwarders. During the week it averages out, but over the weekend it exceeds my daily ingest limit, causing warnings. I would like to find out the cause and a possible solution so I can even out the ingestion and not get violations. Much appreciated for any assistance!
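To narrow down what is actually driving the weekend spike, breaking license usage down by sourcetype (or host) may help; a sketch that assumes you can search the license manager's license_usage.log:

index=_internal source=*license_usage.log type=Usage earliest=-30d
| eval mb = b/1024/1024
| timechart fixedrange=t span=1d sum(mb) by st

Swapping st for h gives the same view per host, which should show which series jumps on weekends.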
Hi All, our SVC calculation is in _introspection and our search name is in _internal and _audit. We need a common field to map those together so we can tie an SVC (and dollar amount) to a particular search. We tried doing it using the SID but that is not matching. Can someone help me out here based on your experiences?
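A heavily hedged sketch of one way the SID mapping sometimes fails to line up: _audit wraps search_id in single quotes, while the per-search resource-usage events in _introspection carry it as data.search_props.sid, so the quotes have to be stripped before the values match. The field names below are assumptions about your data:

index=_audit action=search info=completed
| eval sid = trim(search_id, "'")
| join type=left sid
    [ search index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
      | rename data.search_props.sid as sid
      | stats sum(data.pct_cpu) as total_pct_cpu by sid ]
| table _time user savedsearch_name sid total_pct_cpu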
Hi, I know that as part of SPL-212687 this issue was fixed in 8.2.7 and 9.0+; however, we have had some hosts drop their Defender logs after receiving a Windows Defender update. These UFs are on version 9.0.2 but have still reported this issue. Is there any known problem that would cause this?
I want to call a lookup within a case statement. If possible, please share a sample query.
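lookup is a search command, so it can't be invoked from inside case(); the usual pattern is to run the lookup first and branch on its output. A sketch with hypothetical lookup and field names (status_lookup, status_code, status_description):

| lookup status_lookup status_code OUTPUT status_description
| eval status_label = case(isnotnull(status_description), status_description,
                           status_code >= 500, "server error",
                           true(), "unknown")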
Need help with the extraction of an alphanumeric field, e.g.: ea37c31d-f4df-48ab-b0b7-276ade5c5312
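If the value is always a GUID/UUID in that standard format, a rex along these lines may work (the field name guid is just an assumption):

| rex field=_raw "(?<guid>[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12})"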
Hi, I have two searches joined using the join command. The first search needs to run with earliest=-60m, and the second search uses a summary index where I need to fetch all the results, so the summary index part needs to run over "all time". How can this be done? I am giving earliest=-60m in my first search and a time range of "all time" while scheduling the report consisting of these two searches, but it is not working. I have not used any time in my summary index.

Search to populate my summary index:

index=testapp sourcetype=test_appresourceowners earliest=-24h latest=now
| table sys_id, dv_manager, dv_syncenabled, dv_resource, dv_recordactive
| collect addtime=false index=summaryindex source=testapp

My scheduled report search:

index=index1 earliest=-60m
| join host [ search index=summaryindex earliest="alltime" ]
| table host field1 field2
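A subsearch only inherits the surrounding time range when it doesn't set its own; giving it an explicit earliest=0 (the Unix epoch, i.e. all time) makes the summary-index part cover everything regardless of what the report or the outer search uses. A sketch of the scheduled report search under that assumption:

index=index1 earliest=-60m
| join host
    [ search index=summaryindex source=testapp earliest=0 latest=now ]
| table host field1 field2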
Referring to the inputs.conf below from one of my Windows servers: as you can see, there is some whitespace at the end of the first line, before the closing bracket. "folderA" is the folder whose contents Splunk should be ingesting but is not (there are multiple log files inside). Is it possible that Splunk is not ingesting the logs because of this whitespace? And if yes, is there an explanation we can give to the client?

[monitor://C:\Program Files\somepath\folderA ]
index=someindex
sourcetype=somesourcetype
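For comparison, this is the stanza with the trailing space removed; if the extra space is being treated as part of the monitored path, that path no longer matches a real folder, which would explain files inside folderA being skipped (this is an assumption worth testing rather than a confirmed behaviour):

[monitor://C:\Program Files\somepath\folderA]
index = someindex
sourcetype = somesourcetype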
I'm trying to migrate the KV store on a v8.2 installation on Windows, but it fails early in the process.

splunk migrate kvstore-storage-engine --target-engine wiredTiger

ERROR: Cannot get the size of the KVStore folder at=E:\Splunk\Indexes\kvstore\mongo, due to reason=3 errors occurred. Description for first 3: [{operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}, {operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}, {operation:"failed to stat file", error:"Access is denied.", file:"E:\Splunk\Indexes\kvstore\mongo"}]

I've tried file operations on the folder and subfolders of E:\Splunk\Indexes\kvstore\mongo and everything seems OK. The mongod.log does not contain any rows from the migration. Any nudges in the right direction? Can I upgrade to 9.1 without migrating the store?
<panel>
  <html>
    <div class="modal $tokShowModel$" id="myModal" style="border-top-left-radius:25px; border-top-right-radius:25px;">
      <div class="modal-header" style="background:#e1e6eb; padding:20px; height:10px;">
        <h3>Message:</h3>
      </div>
      <div class="modal-body" style="padding:30%">
        <p style="color:blue;font-size:16px;"> This dashboard has been moved to azure. Kindly visit the following link - <a href="https://mse-svsplunkm01.emea.duerr.int:8000/en-US/app/GermanISMS/kpi_global_v5">Go here </a> </p>
      </div>
    </div>
  </html>
</panel>

I have created this code, which displays the popup, but the Splunk dashboard keeps running in the background. Can anyone please help me with an idea for a script I can add to this code so that the dashboard stops running in the background while the popup is displayed?