Hi, I am working on a lookup within a lookup. I have the following search:

index=* source="*WinEventLog:Security" EventCode=4688
    [| inputlookup attacktoolsh.csv WHERE discovery_or_attack=attack | stats values(filename) as search | format]
| transaction host maxpause=10m
| where eventcount>=5
| fields - _raw closed_txn field_match_sum linecount
| table ComputerName, New_Process_Name, Process_Command_Line, _time, eventcount

This works fine: the lookup attacktoolsh.csv has the tools, and I get a hit on a client. Now I would like to integrate a second lookup file into the search, one containing computername/username pairs, so that when the search hits on attacktoolsh.csv it also checks the second file, and if the computer/user is in that file the search should not produce a notable. In short: computer A is running "nmap", which is allowed on computer A, and computer A is in the second file. Computer B is running "nmap" and is not allowed to run it, so produce a notable/warning. Does anybody have an idea how to integrate this? Thanks.
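A minimal sketch of one way to do this, assuming a second lookup named allowed_tools.csv (a hypothetical name) with a ComputerName column matching the field in the events: look up each result against the allow list and keep only the rows that do not match.

index=* source="*WinEventLog:Security" EventCode=4688
    [| inputlookup attacktoolsh.csv WHERE discovery_or_attack=attack | stats values(filename) as search | format]
| lookup allowed_tools.csv ComputerName OUTPUT ComputerName as allowed_host
| where isnull(allowed_host)
| transaction host maxpause=10m
| where eventcount>=5
| table ComputerName, New_Process_Name, Process_Command_Line, _time, eventcount

The lookup command fills allowed_host only for hosts present in the allow list, so where isnull(allowed_host) suppresses the allow-listed machines. If the allowance should also depend on the specific tool, add the filename (or user) field to the lookup key.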
I'm trying to change the font size of a table in a Dashboard Studio visualization. How is this done in the code? I've tried a few ways but am having no luck. Thanks in advance, and I appreciate the help.
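For what it's worth, a sketch of where this would live in the dashboard's JSON source, assuming the table visualization exposes a fontSize option (that option name is an assumption on my part; the Dashboard Studio table options documentation is the authority here):

"viz_table_1": {
    "type": "splunk.table",
    "options": {
        "fontSize": "large"
    },
    "dataSources": {
        "primary": "ds_search_1"
    }
}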
AL9851 | Z1 | [https://example1.com/] recording played asia location is Down
AL9851 | Z1 | [http://alphabeta/] recording played from asia location is Down
AL9851 | Z1 | [http://alphabeta/] recording played from US location is Down

I have the log lines above and need to extract the URL. The URL varies, but the content before and after it is the same.
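Since the URL always sits between square brackets, a rex with a capture group between the brackets should do it (url is just my choice of field name):

... | rex "\[(?<url>https?://[^\]]+)\]"

This captures everything from http/https up to the closing bracket, regardless of what the URL itself looks like.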
I need to find the number of events that start with certain conditions and end with certain conditions. Example:

index="*" source="*" | transaction startswith="(C OR D)" endswith="(A OR B)"

I need to find the count. How do I do it?
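As a sketch: transaction's startswith and endswith options take a quoted search expression, and a closing stats gives the count. Something like:

index="*" source="*"
| transaction startswith="(C OR D)" endswith="(A OR B)"
| stats count

Note that transaction is expensive; if C/D/A/B are values of a single field, a stats- or streamstats-based approach may scale better.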
I want to add a few regex statements to my existing search based on a token being set. Please see the example below:

| regex _raw="$token1$"
if($token2$){
  | regex _raw!="abc"
  | regex _raw!="xyz"
}

Please let me know if I can achieve this in some other way. Thanks!
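Simple XML has no inline if/else, but you can get the same effect by deriving a third token that always holds a valid exclusion pattern. A sketch, with all token names and values hypothetical (verify the condition matching against your Splunk version):

<input type="checkbox" token="token2">
  <choice value="on">Apply extra filters</choice>
  <change>
    <condition value="on">
      <set token="exclude_pattern">abc|xyz</set>
    </condition>
    <condition>
      <set token="exclude_pattern">(?!x)x</set>
    </condition>
  </change>
</input>

... | regex _raw="$token1$" | regex _raw!="$exclude_pattern$"

When the checkbox is unset, exclude_pattern falls back to (?!x)x, a regex that can never match, so the extra filter becomes a no-op.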
I am running something like the following:

| bin _time span=1s | stats count by fuzz

When doing this, though, I get gaps where there is no result for some one-second time frames. I need per-second data, but this way I feel I am getting skewed numbers, since it is not accounting for the missing seconds. Essentially, I want to see how many transactions per second we are posting to specific servers.
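timechart fills empty buckets with a zero count instead of dropping them, so one low-effort option is (assuming fuzz identifies the server):

... | timechart span=1s count by fuzz

If you need to stay with stats, binning to _time and then | makecontinuous _time followed by | fillnull value=0 achieves a similar gap fill.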
Hi everyone, my team is asking if it would be possible to have a single dashboard panel link out to the dashboard matching each report, similar to the "Navigation Menu", since it might be more user-friendly to give public access to that one dashboard via a hyperlink. I uploaded the names of the reports via a lookup CSV table. I was considering placing the links there, but was unsure if that would work, and would rather have it as a click-on-the-report kind of option for the business users. See the screenshot for reference. Is there a way to do it using XML or HTML?

| inputlookup capcaity_report_titlte.csv
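This can be done in Simple XML with a row drilldown: render the lookup as a table and send clicks to a URL held in one of the columns. A sketch, assuming the CSV has (or can be given) a column named url holding each report's link; the column name is an assumption:

<table>
  <search>
    <query>| inputlookup capcaity_report_titlte.csv</query>
  </search>
  <drilldown>
    <link target="_blank">$row.url|n$</link>
  </drilldown>
</table>

The |n$ filter stops Splunk from URL-encoding the value, so the stored link is used as-is.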
When I add all the details required in the Splunk Add-on for Office 365 and click Add, I get the following error (screenshot attached). Regards, Faisal
Hi, could you please let me know how to split data from one input source into multiple indexes (index1, index2) on the same indexer, from one heavy forwarder?

I tried the following configuration, but all the data goes to the one index that is defined in inputs.conf. If I remove the index from inputs.conf, all the events go to the main index. Thank you in advance.

Here are my configuration and data:

INPUTS.CONF
======
[monitor:///opt/splunk/var/log/tesData]
disabled = false
host = heaveforwarder1

PROPS.CONF
===========
[source::///opt/splunk/var/log/tesData]
TRANSFORMS-routing=vendorData,secureData

TRANSFORMS.CONF
==========
[vendorData]
REGEX=5617605039838520
DEST_KEY=_MetaData:Index
FORMAT=index1

[secureData]
REGEX=6794850084423218
DEST_KEY=_MetaData:Index
FORMAT=index2

Test data:
[08/June/2022:18:23:07] VendorID=5038 Code=C AcctID=5617605039838520
[08/June/2022:18:23:22] VendorID=9109 Code=A AcctID=6794850084423218
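One thing worth checking: the monitor stanza watches a directory, so the source of each event is the full path of the file underneath it, and the props stanza has to match that path (the extra slashes in [source::///opt/...] don't help either). A sketch, assuming the files sit directly under tesData:

PROPS.CONF
===========
[source::/opt/splunk/var/log/tesData/*]
TRANSFORMS-routing = vendorData, secureData

Also make sure index1 and index2 exist on the indexer, and that these props/transforms live on the heavy forwarder, since that is where parsing happens in this topology.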
Windows-based DNS: does anyone know of a few search examples I could use to look up DNS entries, like an A record, please?
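Assuming DNS debug logging is enabled on the Windows DNS server and the log file is being monitored (the index and sourcetype below are placeholders for whatever your input uses), a starting point might be a simple term search on the record-type token, since debug-log lines carry the record type as a bare word:

index=your_dns_index sourcetype=your_dns_sourcetype " A "
| stats count by host

From there you can rex out the queried name if you need per-record reporting; the exact pattern depends on your debug-log format, so check a raw event first.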
index=_internal source=*metrics.log
| eval MB=round(kb/1024,2)
| search group="per_sourcetype_thruput"
| stats sum(MB) by series
| eval sourcetype=lower(series)
| table index sourcetype "sum(MB)"
| append
    [| tstats latest(_time) as latest where index=* earliest=-24h by sourcetype
    | eval LastReceivedEventTime = strftime(latest,"%c")
    | table index, sourcetype LastReceivedEventTime
    | eval sourcetype=lower(sourcetype)]
| stats values(*) as * by sourcetype
| where LastReceivedEventTime != ""

The query above gives me sourcetype, the latest timestamp, and sum(MB), but I am unable to get the index. Can someone please help?
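The per_sourcetype_thruput series only carries the sourcetype, so there is no index field to pull out of it. If the goal is volume per index and sourcetype, license_usage.log has both dimensions; a sketch:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx, st
| eval MB=round(bytes/1024/1024,2)
| rename idx as index, st as sourcetype
| table index sourcetype MB

You could then append the tstats subsearch from above (adding index to its by clause) and finish with stats values(*) as * by index sourcetype.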
Hey all, I'm looking for some advice. We currently have multiple ASAs sending logs to rsyslog. The logs are stored in folders based on hostname, and I currently have the universal forwarder monitoring the parent folder. When it comes to using Splunk add-ons such as "Firegen Log Analyzer for Cisco ASA", it asks me to specify an index. Do I need to compile all the logs from the ASAs into a single directory and then create an index for it? Both rsyslog and Splunk run on Linux (Ubuntu) hosts. Any help would be appreciated. Thanks, Will
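No need to merge the files: one monitor can cover the whole tree, and one index can hold all of it. A sketch of the UF inputs.conf, with the path and the host_segment depth as assumptions to adjust to your layout:

[monitor:///var/log/asa]
index = cisco_asa
sourcetype = cisco:asa
host_segment = 4

host_segment tells Splunk which path component to use as the host (here the 4th: /var/log/asa/<hostname>/...). Create the cisco_asa index on the indexer first, then point the Firegen app at that index.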
Hello everyone. I have set up a cluster of 3 search heads, with Search Head 1 configured as captain. However, there are times when I do not see events on SH1 and SH2, which causes the alerts I have configured on SH1 to fire, since no events are displayed on SH1. What I have to do is move the captain role over to SH3, after which the display of events is restored, temporarily solving the problem. After a while, SH1 takes the captain role back, and again I can't view events on SH1. Why could this be happening? Regards.
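As a first diagnostic step, it may help to compare what each member thinks the cluster state is; running this CLI on each search head shows the current captain and member status:

splunk show shcluster-status

If captaincy is flapping between members, the usual suspects are network latency between the search heads or an undersized member; the raft-related messages in splunkd.log (I believe the component is SHCRaftConsensus, but verify against your own splunkd.log) are worth a look.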
I am looking to list globally all of the knowledge objects within a search head. Where is the URL (REST endpoint) found within Settings? Or is there a different search that would provide it? I want to implement the following search, but need the URL and have not found it as of yet.

| rest <URL goes here> splunk_server=local count=0
| rename eai:* as *, acl.* as *
| eval updated=strptime(updated,"%Y-%m-%dT%H:%M:%S%Z"), updated=if(isnull(updated),"Never",strftime(updated,"%d %b %Y"))
| sort type
| stats list(title) as title, list(type) as type, list(orphaned) as orphaned, list(sharing) as sharing, list(owner) as owner, list(updated) as updated by app
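If memory serves, the endpoint that enumerates knowledge objects across all apps and owners is /servicesNS/-/-/directory (verify against the REST API reference for your version). Plugged into the search it would look like:

| rest /servicesNS/-/-/directory splunk_server=local count=0
| rename eai:* as *, acl.* as *
| eval updated=strptime(updated,"%Y-%m-%dT%H:%M:%S%Z"), updated=if(isnull(updated),"Never",strftime(updated,"%d %b %Y"))
| sort type
| stats list(title) as title, list(type) as type, list(orphaned) as orphaned, list(sharing) as sharing, list(owner) as owner, list(updated) as updated by app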
When the syslog daemon writes to the syslog file, which timestamp does it write? Is it the host date/time or the event date/time? We quite often use the following in the syslog config:

template("${DATE} ${MSGHDR}${MSG}\n");
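In syslog-ng, ${DATE} normally expands to the timestamp carried in the message itself (the sender's header time, assuming keep-timestamp(yes), which is the default), while ${R_DATE} is the time the daemon received the message. If you want to be explicit rather than rely on the default, the template fragments would look like this (hedged from memory of the syslog-ng macro set; check the macros reference):

template("${S_DATE} ${MSGHDR}${MSG}\n");   # timestamp from the message header
template("${R_DATE} ${MSGHDR}${MSG}\n");   # time of receipt on the syslog host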
I have connected my blob storage to Splunk, and the files are uploading to the index, but the CSV format is not being parsed: each line becomes a separate result with no header extraction applied to any of them. I have tried the sourcetypes mscs:storage:blob:csv and csv, but neither worked. When I upload the same CSV as a data input, Splunk recognizes the CSV format and extracts the headers correctly. What am I missing? Are there more changes needed to extract the fields correctly?
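One likely explanation: INDEXED_EXTRACTIONS, which drives CSV header parsing, only applies on the instance that first reads the file, which is why the manual upload works. For data arriving through the add-on, a sketch of props.conf to try on the instance running the input, with the caveat that structured parsing may not apply at all to data fetched via a modular input:

[mscs:storage:blob:csv]
INDEXED_EXTRACTIONS = csv

If that has no effect, a search-time fallback is possible on the search head: a transforms.conf field transform with DELIMS = "," and a FIELDS list of the column names, referenced from props.conf with a REPORT- setting.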
Hello all, is there any option that may allow me to add a button to a dashboard to export that dashboard to PDF in landscape mode, without having to change system settings? This should apply specifically to this one dashboard. Any idea?
Hi, I just upgraded Splunk to 9.0.0 and noticed that ~/var/log/splunk/splunkd.log started filling up with messages like:

06-14-2022 16:41:00.924 +0300 ERROR script [2047436 SchedulerThread] - Script execution failed for external search command 'runshellscript'.
06-14-2022 16:41:00.906 +0300 WARN SearchScheduler [2047436 SchedulerThread] - addRequiredFields: SearchProcessorException is ignored, sid=AlertActionsRequredFields_1655214060.1451, error=Error in 'script': Script execution failed for external search command 'runshellscript'.

These appear in the logs regardless of whether the alert has fired or not, and we rely quite heavily on running external scripts to make external systems aware of problems. I thought all our script bindings to alerts were broken and that we would have to do a rollback; however, I tested, and the scripts were executed fine. My question is: what has changed here, if anything? I would like to get rid of those messages cluttering the logs in vain. And the other thing is: if something really has changed, what should I do to make Splunk happy about the scripts in the alerts? I am looking for something other than "Please write a Python script to do the job." Any clues?
Our users have discovered that they can add data to indexes. This could lead to a user accidentally polluting a production index. I searched the Splunk documentation and the Internet but was unable to find a solution. Does anyone know how we can restrict write access to indexes to the sc_admin role while allowing read access for everyone else?
Hi Splunkers, I'm on an add-on creation task, for Glassfish in particular, and, as on other occasions when I've faced this kind of request, I'm configuring the props.conf file. In this configuration I'm facing the following issue. I know that events start with one of two kinds of character sequences:

[#|
A date in the format month (3 letters) and day, for example: Jun 07

So, in BREAK_ONLY_BEFORE, I put the following regex:

[\[\#\|] | [\w{3}\s\d{2}]

and it works fine. A problem arises in the second case: these events have a structure with many carriage returns. Here is a log sample:

Jun 07, 2022 8:29:52 PM <some_path_here>info
INFO: JVM invocation command line:
-XX:+UnlockDiagnosticVMOptions
-XX:MaxPermSize=<size>
-XX:PermSize=<size>
-XX:NewRatio=<size>
-Xms<size>
-Xmx4096m
<other lines that always start with the - symbol>

In this case, the default event line breaking splits every line of this event into a different event. So, I set:

SHOULD_LINEMERGE=1

but I still have problems; even with this configuration, the events are not properly merged. What I get are 3 different events, split like this:

Jun 07, 2022 8:29:52 PM <some_path_here>info

then the first part of the info:

INFO: JVM invocation command line:
-XX:+UnlockDiagnosticVMOptions
-XX:MaxPermSize=<size>
-XX:PermSize=<size>
-XX:NewRatio=<size>
-Xms<size>
-Xmx4096m

and then the remaining part of the info, starting with the - symbol:

-Djavax.net.<remaining path>
-Dcom.sun.enterprise.security.httpsOutboundKeyAlias=<value>
-DANTLR_USE_DIRECT_CLASS_LOADING=<value>

To fix this, I tried to use:

MUST_NOT_BREAK_AFTER=[\r\n]+

but it does not work. The event is still divided into the above 3 different parts. How can I fix it?
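Rather than fighting SHOULD_LINEMERGE, it is usually more robust to break on the event boundaries directly with LINE_BREAKER, using a lookahead for the two known beginnings; everything between two boundaries, including the lines starting with -, then stays in one event. A sketch (the sourcetype name is a placeholder):

[glassfish:server]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\[#\||[A-Z][a-z]{2} \d{1,2}, \d{4})

The first capture group is consumed as the event separator, and the lookahead (?=...) only breaks where the next line starts with [#| or a date like Jun 07, 2022.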