All Topics



Does AppDynamics version 21.5 support CocoaPods? If so, which pod version should I specify in the Podfile for AppDynamics SDK version 21.5? I could not find a CocoaPods version matching this SDK version.
Hello, I found a ton of eventtypes for the VMware agent module, like AGENT_CONNECTED, AGENT_RECONNECTED, AGENT_SHUTDOWN, etc. I can't find one for AGENT_UNREACHABLE, though. I'm hoping to trigger an alert through Splunk based on that eventtype, but I can't find it in any VMware documentation and can't seem to find anyone else asking the question; I can't be the only one. Is there an AGENT_UNREACHABLE eventtype, or is there a different way I can extract that piece of information from another event? Example:

    index=* Module=Agent AGENT_CONNECTED
    index=* Module=Agent AGENT_UNREACHABLE
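If no AGENT_UNREACHABLE eventtype exists, one workaround is to alert on agents that have gone quiet instead. A hedged sketch (it assumes the `host` field identifies the agent and that 15 minutes of silence means "unreachable"; adjust both to your data):

```
index=* Module=Agent AGENT_CONNECTED OR AGENT_RECONNECTED
| stats latest(_time) as last_seen by host
| where last_seen < relative_time(now(), "-15m")
```

Run it over a window longer than the silence threshold so every agent appears at least once; any host returned has not checked in recently.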
I have a log file with a unique identifier (requestid) for a sequence of events. I want to show a breakdown of all events within the requestid. I plan to do that by "marking" the start and stop logs of the events I want to track (based on the specific log message), and finally produce a table like the one below.

    06/14/22 12:35:03.022 requestid=1 requestid1 started
    06/14/22 12:36:03.022 requestid=1 Event1 started
    06/14/22 12:37:03.022 requestid=1 Event2 started
    06/14/22 12:38:03.022 requestid=1 Event2 ended
    06/14/22 12:39:03.022 requestid=1 Event1 ended
    06/14/22 12:40:03.022 requestid=1 requestid1 ended

    Event  | Start Time            | Duration (s)
    -------+-----------------------+-------------
    Event1 | 06/14/22 12:36:03.022 | 180
    Event2 | 06/14/22 12:37:03.022 | 60

The timeseries will span the duration of the requestid transaction, about 5 minutes. Could you let me know how this can be achieved? Thanks!
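One way to approach this is to extract the event name and start/end marker with rex, then pair them up with stats. A sketch, assuming the messages look exactly like the sample (`requestid=<n> <EventName> started|ended`) and that `your_index` is a placeholder for the real index:

```
index=your_index requestid=*
| rex "requestid=(?<requestid>\d+)\s+(?<event>\S+)\s+(?<phase>started|ended)"
| search event=Event*
| stats earliest(_time) as start_time latest(_time) as end_time by requestid, event
| eval Duration=end_time-start_time
| eval "Start Time"=strftime(start_time, "%m/%d/%y %H:%M:%S.%3N")
| table event, "Start Time", Duration
```

The `search event=Event*` line only keeps the inner events and drops the requestid-level wrapper rows; if your event names don't share a prefix, filter them some other way.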
I'm trying to change the color of a row in a table based on the value of a field, 'action'. If it's equal to "allowed", I want the row to be green; if it's equal to "blocked", I want it to be red. How can this be done in the code? I'm not seeing much documentation online for manipulating context. Thanks in advance, and I appreciate the help.
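Assuming this is a Dashboard Studio table (the mention of context suggests so), the usual pattern is a `columnFormat` coloring rule that reads a match list from `context`. A sketch to verify against the Dashboard Studio docs; the `actionColors` name is arbitrary, and note this colors the cells of the `action` column, so to tint the whole row you would repeat the `rowBackgroundColors` entry for each column:

```json
{
    "options": {
        "columnFormat": {
            "action": {
                "rowBackgroundColors": "> table | seriesByName(\"action\") | matchValue(actionColors)"
            }
        }
    },
    "context": {
        "actionColors": [
            {"match": "allowed", "value": "#118832"},
            {"match": "blocked", "value": "#D41F1F"}
        ]
    }
}
```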
Hi, I am working on a lookup within a lookup. I have the following search:

    index=* source="*WinEventLog:Security" EventCode=4688
        [| inputlookup attacktoolsh.csv WHERE discovery_or_attack=attack
        | stats values(filename) as search
        | format]
    | transaction host maxpause=10m
    | where eventcount>=5
    | fields - _raw closed_txn field_match_sum linecount
    | table ComputerName, New_Process_Name, Process_Command_Line, _time, eventcount

This works fine: the lookup attacktoolsh.csv lists the tools, and I get a hit on a client. Now I would like to integrate a second lookup file into the search, one holding computername/username entries, so that when the search hits on attacktoolsh.csv it also checks the second file, and if the computer/user appears there, the search does not produce a notable.

In short: Computer A is running "nmap"; this is allowed on Computer A, and Computer A is in the second file. Computer B is running "nmap" and is not allowed to, so produce a notable/warning. Anybody have an idea how to integrate this? Thanks.
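One idea is a second subsearch that excludes the allow-listed machines before the transaction is built. A sketch, where `allowed_hosts.csv` is a hypothetical lookup whose column name matches the event field `ComputerName`:

```
index=* source="*WinEventLog:Security" EventCode=4688
    [| inputlookup attacktoolsh.csv WHERE discovery_or_attack=attack
    | stats values(filename) as search
    | format]
    NOT [| inputlookup allowed_hosts.csv | fields ComputerName]
| transaction host maxpause=10m
| where eventcount>=5
```

The NOT subsearch expands to `NOT ((ComputerName=A) OR (ComputerName=B) ...)`. If the allow-list should be per computer *and* tool, keep both columns in the lookup and match on both fields instead.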
I'm trying to change the font size of a table in a Dashboard Studio visualization. How is this done in the code? I've tried a few ways but am having no luck. Thanks in advance, and I appreciate the help.
I have the log lines below and need to extract the URL from them. The URL varies, but the content before and after it is the same.

    AL9851 | Z1 | [https://example1.com/] recording played asia location is Down
    AL9851 | Z1 | [http://alphabeta/] recording played from asia location is Down
    AL9851 | Z1 | [http://alphabeta/] recording played from US location is Down
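Since the URL is always wrapped in square brackets right before "recording played", a rex anchored on those fixed pieces should work. A sketch (field name `url` is arbitrary):

```
... | rex field=_raw "\[(?<url>https?://[^\]]+)\]\s*recording played"
```

Anchoring on both the brackets and the trailing "recording played" keeps the extraction from matching other bracketed text that might appear in the line.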
I need to find the number of events that start with certain conditions and end with certain conditions. Example:

    index="*" source="*" | transaction startswith=C OR D endswith=A OR B

I need to find the count. How do I do it?
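The transaction command accepts eval expressions for its boundaries, which handles the OR logic; a plain `| stats count` then counts the transactions. A sketch, assuming `code` is a hypothetical field holding the A/B/C/D value:

```
index="*" source="*"
| transaction startswith=eval(code="C" OR code="D") endswith=eval(code="A" OR code="B")
| stats count
```

If you only need the count and not the grouped events themselves, a stats/streamstats approach over the start markers is usually cheaper than transaction on large data.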
I want to add a few regex statements to my existing search only when a token is set. Pseudocode example:

    | regex _raw="$token1$"
    if($token2$){
        | regex _raw!="abc"
        | regex _raw!="xyz"
    }

Please let me know if I can achieve this in some other way. Thanks!
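SPL has no if-block, but a common Simple XML workaround is to have the input's change handler set a helper token to either the extra SPL or an empty string, then splice that token into the search. A sketch to adapt; the token names and the match condition are illustrative and worth testing in your dashboard:

```xml
<input type="checkbox" token="token2">
  <choice value="filtered">Apply extra filters</choice>
  <change>
    <condition match="isnotnull($token2$)">
      <set token="extra_filters">| regex _raw!="abc" | regex _raw!="xyz"</set>
    </condition>
    <condition>
      <set token="extra_filters"> </set>
    </condition>
  </change>
</input>
...
<query>index=foo | regex _raw="$token1$" $extra_filters$</query>
```

The search itself stays static; only the content of `$extra_filters$` changes with the checkbox state.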
I am running something like the following:

    | bin _time span=1s
    | stats count by fuzz

When doing this, though, I get gaps where there is no result for some one-second time frames. I need per-second data, but I feel I am getting misleading numbers since the missing seconds are not accounted for. Essentially, I want to see how many transactions per second we are posting to specific servers.
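timechart, unlike bin + stats, emits a row for every interval in the search window and zero-fills the empty ones, which removes the gaps. A sketch, assuming `fuzz` is the server field you are splitting by:

```
index=your_index ...
| timechart span=1s count by fuzz
```

If you need the bin + stats shape instead (e.g., to feed later commands), `| makecontinuous _time span=1s | fillnull value=0` after the stats achieves the same zero-filling for a single series.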
Hi everyone, my team is asking whether it is possible to have a single dashboard panel that links each listed report to its matching dashboard, similar to the navigation menu, since that might make it more user-friendly to give public access to that dashboard via a hyperlink. I uploaded the names of the reports via a lookup CSV table and was considering listing them in the panel, but I was unsure whether that would work, and I would rather have a click-on-the-report kind of option for the business users. See screenshot for reference. Is there a way to do it using XML or HTML?

    | inputlookup capcaity_report_titlte.csv
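In Simple XML, a table panel with a row-level drilldown link can do this: list the reports from the lookup and open the matching dashboard when a row is clicked. A sketch under the assumption that the CSV has a hypothetical `dashboard_name` column holding each dashboard's ID in the search app:

```xml
<panel>
  <table>
    <search>
      <query>| inputlookup capcaity_report_titlte.csv</query>
    </search>
    <drilldown>
      <link target="_blank">/app/search/$row.dashboard_name|n$</link>
    </drilldown>
  </table>
</panel>
```

`$row.dashboard_name|n$` substitutes the clicked row's value without URL encoding; adjust the app name in the path if the dashboards live elsewhere.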
When I add all the details required in the Splunk Add-on for Office 365 and click Add, I get the following error (screenshot attached). Regards, Faisal
Hi, could you please let me know how to split data into multiple indexes (index1, index2) on the same indexer, from one input source on one heavy forwarder?

I tried the following configuration, but all the data goes to the one index defined in inputs.conf. If I remove index from inputs.conf, all the events go to the main index. Thank you in advance. Here are my configuration and data:

inputs.conf:

    [monitor:///opt/splunk/var/log/tesData]
    disabled = false
    host = heaveforwarder1

props.conf:

    [source::///opt/splunk/var/log/tesData]
    TRANSFORMS-routing = vendorData,secureData

transforms.conf:

    [vendorData]
    REGEX = 5617605039838520
    DEST_KEY = _MetaData:Index
    FORMAT = index1

    [secureData]
    REGEX = 6794850084423218
    DEST_KEY = _MetaData:Index
    FORMAT = index2

Test data:

    [08/June/2022:18:23:07] VendorID=5038 Code=C AcctID=5617605039838520
    [08/June/2022:18:23:22] VendorID=9109 Code=A AcctID=6794850084423218
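One likely culprit in a setup like this: the `source::` pattern in props.conf matches against the literal source value of the events, which starts with a single `/` (e.g. `/opt/splunk/var/log/tesData/file.log`). A stanza written with the three-slash `monitor://` syntax never matches, so the transforms never run. A hedged fix to try; the trailing wildcard also covers files inside the directory:

```
# props.conf — single slash after "source::", wildcard for files beneath
[source::/opt/splunk/var/log/tesData*]
TRANSFORMS-routing = vendorData,secureData
```

Since index-time routing happens during parsing, this props/transforms pair must live on the heavy forwarder, not on the indexer behind it.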
For Windows-based DNS: does anyone know of a few search examples I could use to look up DNS entries, like an A record?
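The answer depends on how DNS data gets into Splunk. If the Windows hosts run Sysmon with DNS query logging enabled (Event ID 22), a sketch like this surfaces queries and their answers; the source name varies by deployment, so treat it as an assumption:

```
index=* source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=22
| table _time, Computer, QueryName, QueryResults
```

For A records specifically, `QueryResults` containing a bare IPv4 address is the usual tell. If instead you ingest the Windows DNS server's debug/analytical logs, the field names will differ; check the add-on you use for its extracted fields.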
    index=_internal source=*metrics.log
    | eval MB=round(kb/1024,2)
    | search group="per_sourcetype_thruput"
    | stats sum(MB) by series
    | eval sourcetype=lower(series)
    | table index sourcetype "sum(MB)"
    | append
        [| tstats latest(_time) as latest where index=* earliest=-24h by sourcetype
        | eval LastReceivedEventTime = strftime(latest,"%c")
        | table index, sourcetype LastReceivedEventTime
        | eval sourcetype=lower(sourcetype)]
    | stats values(*) as * by sourcetype
    | where LastReceivedEventTime != ""

The query above gives me sourcetype, the latest timestamp, and sum(MB), but I am unable to get the index. Can someone please help?
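The per_sourcetype_thruput rows in metrics.log do not carry an index field, which is why the index column comes back empty. One commonly suggested alternative is license_usage.log, whose type=Usage events record both the index (`idx`) and sourcetype (`st`). A sketch along those lines:

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx, st
| eval MB=round(bytes/1024/1024,2)
| rename idx as index, st as sourcetype
| append
    [| tstats latest(_time) as latest where index=* earliest=-24h by index, sourcetype
    | eval LastReceivedEventTime=strftime(latest,"%c")]
| stats values(*) as * by index, sourcetype
| where isnotnull(LastReceivedEventTime)
```

Note license_usage.log measures licensed volume rather than indexing throughput, so the MB figures will differ slightly from the metrics-based numbers.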
Hey all, I'm looking for some advice. We currently have multiple ASAs sending logs to rsyslog. The logs are stored in folders based on hostname, and I currently have the universal forwarder monitoring the parent folder. When it comes to using Splunk add-ons such as "Firegen Log Analyzer for Cisco ASA", it asks me to specify an index. Do I need to compile all the logs from the ASAs into a single directory and then create an index for it? Both rsyslog and Splunk are on Linux (Ubuntu) hosts. Any help would be appreciated. Thanks, Will
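There should be no need to merge the files: an index is a Splunk storage container, not a directory, so one monitor stanza over the parent folder can send everything to a single index while the per-host folders stay as they are. A sketch for the forwarder's inputs.conf; the path, index name, and sourcetype are placeholders, and the index must already exist on the indexer:

```
# inputs.conf on the universal forwarder
[monitor:///var/log/rsyslog/asa/*/asa.log]
index = cisco_asa
sourcetype = cisco:asa
# pull the host name from the 5th path segment (/var=1/log=2/rsyslog=3/asa=4/<host>=5)
host_segment = 5
```

Then point the add-on at the `cisco_asa` index. `host_segment` keeps the original ASA hostname on each event even though one forwarder reads all the folders.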
Hello everyone. I have set up a cluster of 3 search heads, with Search Head 1 configured as captain. It turns out that there are times when I do not see events on SH1 and SH2, and this causes the alerts I have configured on SH1 to fire, since no events are displayed there. What I have to do is move the captain role to SH3, after which the display of events is restored, temporarily solving the problem. After a while, SH1 takes the captain role again, and once more I can't view events on SH1. Why could this be happening? Regards.
I am looking to list globally all of the knowledge objects within a search head. Where is the URL (REST endpoint) found within Settings? Or is there a different search that would provide it? I want to implement the following search, but I need the endpoint and have not found it as of yet.

    | rest <URL goes here> splunk_server=local count=0
    | rename eai:* as *, acl.* as *
    | eval updated=strptime(updated,"%Y-%m-%dT%H:%M:%S%Z"), updated=if(isnull(updated),"Never",strftime(updated,"%d %b %Y"))
    | sort type
    | stats list(title) as title, list(type) as type, list(orphaned) as orphaned, list(sharing) as sharing, list(owner) as owner, list(updated) as updated by app
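If memory serves, the REST API exposes an aggregate `directory` endpoint that enumerates knowledge objects across types; it is worth verifying against the REST API reference for your Splunk version. A sketch plugging it into the first line of the search above:

```
| rest /servicesNS/-/-/directory splunk_server=local count=0
| rename eai:* as *, acl.* as *
| sort type
| stats list(title) as title, list(type) as type by app
```

The `-/-` wildcards request objects across all users and apps, subject to the permissions of the account running the search.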
When the syslog daemon writes to the syslog file, which timestamp does it write: the host's date/time or the event's date/time? We quite often use the following in the syslog config:

    template("${DATE} ${MSGHDR}${MSG}\n");
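In syslog-ng (which that template syntax suggests), my understanding is that `${DATE}` depends on the `keep-timestamp()` setting: with the default (yes), it is the timestamp carried in the message when one can be parsed, i.e. the sender's clock. To make the choice explicit, the macros can be swapped; a sketch to verify against your syslog-ng version's macro reference:

```
# S_DATE = timestamp inside the message (sender's clock)
# R_DATE = when this host received the message (receiver's clock)
template("${R_DATE} ${MSGHDR}${MSG}\n");
```

Using `${R_DATE}` guarantees the file shows the receiving host's time regardless of what the sender put in the header.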
I have connected my blob storage to Splunk, and the files are uploading to the index, but the CSV format is not working: each line becomes a separate result, with no header extraction applied to any of them. I have tried the sourcetypes mscs:storage:blob:csv and csv, but neither worked. When I upload the same CSV as a data input, Splunk recognizes the CSV format and extracts the headers correctly. What am I missing? Are there more changes needed to extract the fields correctly?
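The upload wizard applies structured-data parsing (`INDEXED_EXTRACTIONS = csv`), which is why it works there. One thing to try is declaring the same on the sourcetype, in props.conf on the instance where the data first hits a full Splunk parser. A hedged sketch, with the caveat that structured parsing happens at input time, so it may not take effect for data pulled in through a modular input such as the Azure add-on:

```
# props.conf on the ingesting instance
[mscs:storage:blob:csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
```

If the add-on's input path bypasses structured parsing, the fallback is search-time extraction: leave the events as-is and use `multikv` or a `DELIMS`-based transform to map the header row to fields.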