All Topics


Hello Splunkers, I just wanted to get some best-practice input. My scenario is that I have threat intelligence coming in from ThreatConnect; the index is "threatconnect". ThreatConnect is auto-tagging any IOCs related to the SolarWinds breach as "solarwinds breach", and I've seen other tags come in containing the word "solarwinds", so I will wildcard it. The tag comes in under the field "event.ts_detail". I run this search and I see activity: index=threatconnect event.ts_detail=*solarwinds* However, all the activity I am seeing is a single IP brute-forcing us constantly, which comes in as event.src=45.129.33.129. Therefore, I created an alert with this search, which runs every hour: index=threatconnect event.ts_detail=*solarwinds* event.src!=45.129.33.129 My question to you: is it best practice to set the alert and exclude something I don't care about (IP 45.129.33.129, since it's only probing)? Would you do it differently?
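For reference, one pattern sometimes used instead of hardcoding exclusions into the alert string is a small exclusion lookup, so new noisy indicators can be added without editing the saved search (the lookup name and column here are assumptions):

```spl
index=threatconnect event.ts_detail=*solarwinds*
    NOT [| inputlookup ioc_exclusions.csv | fields event.src ]
```

The subsearch expands to NOT (event.src=... OR event.src=...), so excluding another IP is just a new row in ioc_exclusions.csv rather than a change to the alert itself.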
Hi @doksu, I've found that the REGEX used by [netfilter_flags] will break if the event contains a sequence of uppercase characters before the actual flags within the event. This can occur if the event contains an action of DROP, ACCEPT, or REJECT with whitespace before and after the action; in these cases, the FLAGS and tcp_flags fields are set to the action instead. My fix was to change the REGEX line within the netfilter_flags stanza to: REGEX = \s((?:(ACK|FIN|PSH|RST|SYN|URG)\s)+)
Hi Splunkers, I run Splunk Cloud and recently worked with Support to install Splunk Enterprise Security. Within Splunk Enterprise Security, how do I confirm that it is correlating across all of my indexes? The reason for asking is that I am not seeing any notable events. I assumed that, out of the box, Splunk Enterprise Security would see all my indexes and correlate them against its pre-built alerts.
Hi Splunkers, my organization runs Splunk Enterprise. I see that there is a TA installed for Anomali ThreatStream. I am trying to find out which index and sourcetype its logs are categorized under so I can run searches against them. It's my understanding that when the app was set up it would have had to be given an index and sourcetype. What is the best way I can accomplish this?
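One quick way this is sometimes checked is with tstats over the indexed metadata; the sourcetype wildcard below is a guess and may need adjusting to whatever the TA actually uses:

```spl
| tstats count where index=* sourcetype=*threatstream* by index, sourcetype
```

Another option is to run splunk btool inputs list --debug on the host where the TA is installed, which shows the index each input stanza writes to.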
I have been asked to generate a CSV with the indexed information of one index after 02:00 hours, and the name of the generated CSV file should be the index name plus the date; I don't know if the name can be concatenated, e.g. csv = index_date.csv. I know the inputlookup command exists, and I think it would be something like index=myindex | inputlookup file.csv, but I don't know how to build the complete query so that it generates the file with the name I need, for example: firewall_20122020, firewall_21122020, firewall_22122020, firewall_23122020
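A sketch of one way to build a dated filename, assuming an index called firewall and the ddmmyyyy format above. Note that outputcsv writes a file (inputlookup reads one), and map substitutes the computed name into the inner search:

```spl
| makeresults
| eval filename="firewall_" . strftime(relative_time(now(), "-1d@d"), "%d%m%Y")
| map search="search index=firewall earliest=-1d@d latest=@d | outputcsv $filename$"
```

Scheduled after 02:00, this would write e.g. firewall_20122020.csv into $SPLUNK_HOME/var/run/splunk/csv/ on the search head.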
Hi guys, I have installed TA-jira-service-desk-simple-addon on our Splunk instance and everything went well during the configuration; I was able to fetch a list of my JIRA projects from Splunk. The problem came when trying to automate an adaptive response to create a ticket in JIRA from a correlation search in Splunk Enterprise Security. The adaptive response failed multiple times, even when trying to run it manually ad hoc. Does anyone know what might be the cause of that issue? Note that I have followed the JIRA Service Desk documentation by the book.
Hi, I have a working Splunk 7.3.4 instance. For the last few days I have noticed issues with the LDAP connection settings: LDAP requires typing the Bind DN password again to get the updated DL list or to run ldapsearch. I don't find any authentication errors in splunkd.log. Please assist in understanding what the issue might be.
Hello guys, I am new to the Splunk world. I want to create a report that shows total inbound traffic in MB. Here is my search: sourcetype=fgt_traffic dest=1.1.1.* NOT (src=1.1.1.* OR dest=skyroom.online) bytes_in>0 AND action="allowed" — and here is my pivot visual with the table entry.
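Building on the search above, a minimal sketch that sums bytes_in and converts the total to MB (assuming bytes_in is in bytes):

```spl
sourcetype=fgt_traffic dest=1.1.1.* NOT (src=1.1.1.* OR dest=skyroom.online) bytes_in>0 action="allowed"
| stats sum(bytes_in) as total_bytes
| eval total_MB=round(total_bytes/1024/1024, 2)
```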
I want to exclude the (dst="10.0.0.0/8" OR dst="172.16.0.0/12" OR dst="192.168.0.0/16") IP ranges.

My configurations:

props.conf:
TRANSFORMS-null = internal_Logs10, internal_Logs172, internal_Logs192

transforms.conf:
[internal_Logs10]
REGEX = dst\=10\.0\.0\.0\/8
DEST_KEY = queue
FORMAT = nullQueue

[internal_Logs172]
REGEX = dst\=172\.16\.0\.0\/12
DEST_KEY = queue
FORMAT = nullQueue

[internal_Logs192]
REGEX = dst=192\.168\.0\.0\/16
#REGEX = dst=192\.168\.5.*
DEST_KEY = queue
FORMAT = nullQueue

It works perfectly for 192.168.5.* but not for the subnet ranges. Kindly share or assist with configuration for this.
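For what it's worth, a regex like dst\=10\.0\.0\.0\/8 matches only the literal text dst=10.0.0.0/8, not addresses inside that range; the 192.168.5.* variant works because it matches actual addresses. A sketch that matches each CIDR range by address pattern (untested, and it assumes dst= appears literally in the raw event):

```ini
[internal_Logs10]
REGEX = dst=10\.\d{1,3}\.\d{1,3}\.\d{1,3}
DEST_KEY = queue
FORMAT = nullQueue

[internal_Logs172]
REGEX = dst=172\.(1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3}
DEST_KEY = queue
FORMAT = nullQueue

[internal_Logs192]
REGEX = dst=192\.168\.\d{1,3}\.\d{1,3}
DEST_KEY = queue
FORMAT = nullQueue
```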
Hi All, I'm trying to figure out a way to set up a Splunk alert to do the following: when the string "GFX_On" is found in our log, there should always be a "GFX_Off" string found no more than 15 minutes after. We want Splunk to alert if it doesn't find "GFX_Off" within 15 minutes of the last "GFX_On" it saw. Basically this is a system that fires graphics on and off on a video production system, and we want to be alerted if the "GFX_Off" command doesn't appear in our logs within 15 minutes. Hope this makes sense. Would really appreciate any help, as I'm not even sure where to start; I think I would need some kind of conditional in the search. Thanks!
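One possible shape for such a search, assuming an index name of video_logs (an assumption) and that both strings appear in _raw; the alert would be scheduled every few minutes and trigger on any result:

```spl
index=video_logs ("GFX_On" OR "GFX_Off") earliest=-60m
| eval state=if(searchmatch("GFX_On"), "on", "off")
| stats latest(eval(if(state="on", _time, null()))) as last_on,
        latest(eval(if(state="off", _time, null()))) as last_off
| where isnotnull(last_on) AND (isnull(last_off) OR last_off < last_on)
    AND now() - last_on > 900
```

The where clause only passes when the most recent GFX_On is more than 900 seconds (15 minutes) old and no GFX_Off has followed it.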
Hi everyone, I'm receiving multiple JSON events as one event from a third-party application, as shown below:

{"metric":"host1.adapter.DEMO.ALL.in.error","event":"metric","type":"m","value":0}
{"metric":"host1.adapter.DEMO.ALL.in.filter","event":"metric","type":"m","value":0}
{"metric":"host1.adapter.DEMO.ALL.in.total","event":"metric","type":"m","value":996}
{"metric":"host1.adapter.DEMO.ALL.out.error","event":"metric","type":"m","value":0}
{"metric":"host1.adapter.DEMO.ALL.out.total","event":"metric","type":"m","value":996}

I tried to use the spath and mvexpand commands to split it into separate events, but couldn't get the results I expected. Finally, I need to apply my search to get a total count per metric value, as shown below:

source="tcp:10244" sourcetype="json_no_timestamp"
| spath metric
| search metric="host1.adapter.DEMO.WebLogicInputFlow.out.total"
| sort _time
| autoregress "value" p=1
| eval diff=if(value>value_p1, max(value)-min(value_p1), null())
| timechart span=60s sum(diff) as total_count

Here are my props.conf lines:

[adapter:json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false

Any help is appreciated.
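If the goal is to split each line into its own event at index time, a props.conf sketch along these lines is sometimes used; it must be applied on the instance that parses the TCP input, and whether INDEXED_EXTRACTIONS behaves well with this particular stream is an assumption to verify:

```ini
[adapter:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{"metric")
INDEXED_EXTRACTIONS = json
KV_MODE = none
AUTO_KV_JSON = false
```

The lookahead in LINE_BREAKER breaks before each {"metric" object, so every JSON line becomes a separate event.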
How do I resolve this error? It happens with both Linux and Mac: The TCP output processor has paused the data flow. Forwarding to host_dest=192.168.1.5 inside output group default-autolb-group from host_src=MacBook-Air.local has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
Hi at all, I developed an app that uses a KV Store to manage a whitelist, and it runs without problems. But when I started to use real data I found that I could have around 140,000 rows in my KV Store, and I know that the limit is 50,000. What could be a workaround that avoids rebuilding my app? What problems could setting max_rows_per_query = 150000 cause? Is there another way to manage 140,000 rows in a table? Thanks in advance. Ciao. Giuseppe
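For context, 50,000 is the default value of max_rows_per_query in limits.conf, which caps how many rows a single query returns rather than how many rows the collection can store. Raising it would look like the sketch below; the main thing to watch is the memory cost of pulling very large result sets in one query:

```ini
# limits.conf on the search head
[kvstore]
max_rows_per_query = 150000
```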
Hi all, I want to integrate SCOM with Splunk. Please suggest best practices and the best options.
I have multiple cells that contain information like: URL Name When I click on the cell, I want to open a tab to URL, and ignore "name". For example, in this table when I click on the cell, I want to navigate to someurl, assuming someurl is an actual webpage link. | makeresults | eval A="someurl",B="someurl",C="someurl",A_name="somename",B_name="somename",C_name="somename" | eval As = mvappend(A,A_name) | eval Bs = mvappend(B,B_name) | eval Cs = mvappend(C,C_name) | table As Bs Cs  
Hello, I have been reading all the blogs around this subject; some of my questions have been answered, but in this case I am not sure how to approach it. Scenario: three fields: 1. RecordStage, 2. pdfRecord, 3. csvRecord. RecordStage is a field I have created that has all the values I need. I just want to know the following: if RecordStage=0, display 0; if RecordStage>=1, indicate whether it is logged in pdfRecord or csvRecord ("Yes" or "No"), or "All" if it is logged in both. Here is my attempt so far:

(index="xyz") OR (index="123")
| stats values(compGen) as compGen values(levels) as levels count(eval(like(level,"RecordStage%"))) AS RecordStage values(Result) as Result by TextDoc
| eval Result=if(RecordStage=0, "0" AND (RecordStage=>1 AND RecordStage=pdfRecord OR RecordStage=csvRecord), "Yes", "No", "In Both Fields"))
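A sketch of how the three outcomes might be expressed with case() instead of nested if(); it assumes "logged in pdfRecord/csvRecord" means the field is non-null, which may not match the actual data:

```spl
| eval Result=case(RecordStage==0, "0",
    RecordStage>=1 AND isnotnull(pdfRecord) AND isnotnull(csvRecord), "All",
    RecordStage>=1 AND isnotnull(pdfRecord), "Yes (pdf)",
    RecordStage>=1 AND isnotnull(csvRecord), "Yes (csv)",
    true(), "No")
```

case() evaluates condition/value pairs in order and returns the first match, which maps more directly to multi-way logic than if().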
So I've been using Splunk for a while now and it's fine. To access the console, I use an SSH tunnel forwarding localhost port 9002 to the Splunk server's web console on port 8000. It's been working fine until recently; I think someone modified web.conf or installed a Splunk app. I used to be able to go to https://localhost:9002 to access the Splunk UI, but now when I go there, the URL changes to http://127.0.0.1:8000/en-US/ (what it's running on, on the server). How do I stop it from changing the URL like this?
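If the rewrite comes from Splunk Web redirecting to its own host and port, web.conf does have a documented setting for running Splunk Web behind a proxy; whether it applies to a plain SSH tunnel setup is an assumption worth testing:

```ini
# web.conf on the Splunk server
[settings]
tools.proxy.on = true
```

Comparing the server's current web.conf against a backup (or btool web list --debug output) would also show whether something was recently changed there.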
Hello fellow Splunk users, I understand it is possible to default in a single value in the event a lookup match is not found. In my case I have a CSV where we look up a TenantId; if it's found, we retrieve the tenant name, latitude, and longitude for geostats purposes. What I'd like to do is return a default name, latitude, and longitude in the event a TenantId doesn't match anything in our lookup. All help appreciated. Thanks in advance.
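A minimal sketch of the default-value pattern using coalesce after the lookup; the lookup and field names here are assumptions:

```spl
| lookup tenant_lookup TenantId OUTPUT TenantName, latitude, longitude
| eval TenantName=coalesce(TenantName, "Unknown tenant"),
       latitude=coalesce(latitude, 0.0),
       longitude=coalesce(longitude, 0.0)
```

fillnull value=0 latitude longitude is an alternative for the numeric fields if a single default suffices.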
Hi, I have multiple files being delivered on a daily basis in the below format: <filename>.<yyyymmdd>.xml, for example price.daily.20201218.xml. Example events when files have been delivered:

2020-12-11 06:17:47 INFO : File_created=/current/price.daily.20201210.xml; file_size=86324624
2020-12-11 06:17:47 INFO : File_created=/current/test.daily.20201210.xml; file_size=6896548
2020-12-11 06:17:47 INFO : File_created=/current/price.daily.sources.20201210.xml; file_size=48526

I am trying to build a query that checks, for a specific set of files (I'm not interested in all files), whether they have been delivered, and reports the status in a table. To achieve this, I have a lookup file with Filename and Group as columns:

Filename             Group
price.daily          Pricing
price.daily.vendor   Pricing
price.daily.source   Pricing
test.daily           Testing

I would like a table that shows whether the files belonging to group Pricing have been delivered for a specific date. Can anyone please advise the best way to achieve this?

FileName                         Group    Status     Time
price.daily.20201210.xml         Pricing  Delivered  2020-12-11 06:17:47
price.daily.vendor.20201210.csv  Pricing  Pending    N.A
price.daily.source.20201210.xml  Pricing  Delivered  2020-12-11 06:17:47
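One possible shape, starting from the lookup and left-joining today's delivery events onto it; the index name and the exact rex are assumptions that would need adjusting to the real data:

```spl
| inputlookup expected_files.csv
| search Group="Pricing"
| join type=left Filename
    [ search index=file_logs "File_created" earliest=@d
      | rex "File_created=\S+/(?<Filename>.+?)\.\d{8}\.\w+;"
      | stats latest(_time) as delivered_time by Filename
      | eval Time=strftime(delivered_time, "%Y-%m-%d %H:%M:%S") ]
| eval Status=if(isnotnull(Time), "Delivered", "Pending"),
       Time=coalesce(Time, "N.A")
| table Filename Group Status Time
```

Starting from the lookup (rather than the events) is what lets files that never arrived show up as Pending rows.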
I am working on using prediction for alerting. I want to track some data that follows business trends; it is fairly seasonal. Right now we manually calculate a baseline, which only works if our data is stable, so I have found that as our traffic rises and falls with its seasonality, our calculated baseline is inaccurate most of the time. I have created a prediction-based alert, but it has not been received well since it uses standard deviation. Is there an issue with basing alerting on standard deviation? I am just trying to understand if there is some limitation or if it is error-prone. The data I am getting seems to produce accurate thresholds for the alerts.