All Topics


Will there be an upgrade, or do I have to go back to Splunk 7.x to use that app?
Hi, I am trying to set a token to display a part of my dashboard only if the value of one of the fields in my search equals a certain string. Here is an example: <search> <query>| makeresults |eval is_Valid=(if("Valid" == "Valid","true","false"))</query> <done> <condition match="'result.is_Valid'==true"> <set token="Display_Html">true</set> </condition> </done> </search> However, this condition doesn't seem to work. I can clearly see my makeresults command works in a search, or even in the dashboard, but the condition + set token doesn't seem to work. Does anybody have any idea what I'm doing wrong, please? Thank you Regards Laurent
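One thing that may be going wrong, based on how Simple XML evaluates <condition match>: the expression compares the result field against the bare word true, which is treated as a field reference rather than the string "true". A hedged, untested sketch with the literal quoted (token names kept from the question):

```xml
<search>
  <query>| makeresults | eval is_Valid=if("Valid" == "Valid", "true", "false")</query>
  <done>
    <!-- compare against the quoted string "true", not the bare word true -->
    <condition match="'result.is_Valid' == &quot;true&quot;">
      <set token="Display_Html">true</set>
    </condition>
    <condition>
      <unset token="Display_Html"></unset>
    </condition>
  </done>
</search>
```

The extra <condition> without a match clears the token when the value is anything else, so the panel hides again on refresh.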
When using the predict command, the timechart shows the calculated value but also has the prediction line following it. For example I have: <search>.... | timechart count as Volume | predict Volume What happens is that my historical values show both the Volume and prediction(Volume) lines. What I would like is for Volume to show until the last calculated point, then switch over to prediction(Volume). Has anyone achieved this?
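One approach that may work is to rename the prediction and blank it out wherever a historical value exists, so only the future portion of the line is drawn. A rough, untested sketch (the field name Prediction and the future_timespan value are illustrative):

```
<your search>
| timechart count as Volume
| predict Volume as Prediction future_timespan=20
| eval Prediction = if(isnotnull(Volume), null(), Prediction)
```

predict also emits confidence-band fields such as upper95(Prediction) and lower95(Prediction); those can be dropped with fields - if only the two lines are wanted.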
I discovered our logs were split between events. I noticed that Splunk split the event at ANY date and time it found in our logs. See below. <ResponseEndTimestamp>11/05/2020 09:53:33</ResponseEndTimestamp> </RCExtResponse> 2020-11-05 08:53:36,916 [http-nio-8080-exec-4] [198.153.9.206||1573FF21ECE6B4E4DA213F08E73230B3|] INFO c.v.c.d.DrFirstGatewayService - Retrived patient object(Patient:...) .... 2020-11-05 08:53:37,110 [http-nio-8080-exec-4] should have started a new event. To fix this, I wanted to define a BREAK_ONLY_BEFORE in the source type. Unfortunately, the Web UI keeps changing the values when I save (see the before/after screenshots). Has anyone encountered this? Any help would be greatly appreciated. Josh
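Since the Web UI is mangling the saved values, it may be easier to set the line breaking directly in props.conf on the indexer or heavy forwarder. A hedged sketch, assuming events always start with a 2020-11-05 08:53:36,916-style timestamp (the sourcetype name is a placeholder):

```
# props.conf -- break only where a line starts with the leading timestamp pattern
[my_app_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%L
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Using LINE_BREAKER with a lookahead instead of BREAK_ONLY_BEFORE also stops Splunk from treating the embedded <ResponseEndTimestamp> date as an event boundary, because only the leading timestamp pattern can start a new event.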
Hi Everyone, I'll try to make this as clear as possible, but it's quite hard to explain in depth. What I'm trying to do is map a certain numeric field value called "ordernumber" (for example 10) to a lookup file. This lookup file contains a start range and stop range (for example 1-100) to identify which orders belong to which suppliers. A supplier fills in a unique order number every time it sends an order. Basically it's described in this topic: https://community.splunk.com/t5/Splunk-Search/How-do-you-create-a-lookup-from-a-range-of-values/m-p/423288 To make this work I used the following command: | map search="| inputlookup supplier_range.csv | search To > $Ordernumber$ AND From < $Ordernumber$ |eval Ordernumber=\"$Ordernumber$\", details=\"$details$\", Matnumber=\"$matnumber$\" " maxsearches=1000000000000 However, I now have one issue. Currently, not all possible ordernumber ranges are completely defined, and suppliers sometimes make typos when filling in the forms. In this case, I want to keep the events containing the "wrong" number and just return an empty result for the supplier name, which is retrieved from the lookup file. I tried an if statement, but can't seem to get it to work; it then doesn't return any results anymore: "| eval name=if(To > $ $Ordernumber$ AND From < $Ordernumber$, "Name", EMPTY)" I'm kinda stuck on this one. Is this even doable via the map command, or should I be using some other command? And how? In case I need to clarify anything, please let me know. Kind regards,
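One way to keep the non-matching events with the map approach is to append an empty fallback row inside the mapped search, so head 1 returns either the matched supplier or a blank one. A heavily hedged, untested sketch (field names taken from the question):

```
| map maxsearches=100000 search="| inputlookup supplier_range.csv
    | where tonumber(\"$Ordernumber$\") >= From AND tonumber(\"$Ordernumber$\") <= To
    | append [| makeresults | eval Name=\"\"]
    | head 1
    | eval Ordernumber=\"$Ordernumber$\", details=\"$details$\", Matnumber=\"$matnumber$\""
```

If the where clause matches, the real lookup row wins head 1; otherwise the appended blank row survives, so the event is kept with an empty supplier name.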
Hi, I have the following string that is logged by the application, and I am wondering if there is a way to pretty-print it like the rest of the logs. Here is the raw data: {"timestamp":"2020-11-10T15:27:02.187Z","level":"INFO","thread":"main","logger":"ca.nbc.payment.pmtinternationallibrary.config.MyApplicationContextInitializer","message":"{\"code\": \"CODE\",\"text\":null,\"origin\":null,\"rule\": \"RULE\"}","context":"default"} I guess it has something to do with the characters being escaped, but I did not find anything that got it to work properly. I would like to have something like: { "timestamp": "2020-11-09T20:54:57.245Z", "level": "INFO", "thread": "main", "logger": "ca.nbc.payment.pmtinternationallibrary.config.MyApplicationContextInitializer", "message": {     "code": "CODE",     "text": null,     "origin": null,     "rule": "RULE"}, "context": "default" } Thanks
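Pretty-printing the stored _raw isn't really possible at search time, but the escaped inner JSON can be expanded into proper fields with two spath calls, which gives the same readable structure in the events viewer. A hedged sketch (field names assumed from the sample):

```
<your search>
| spath output=inner path=message
| spath input=inner
```

The first spath pulls message out of the outer JSON (unescaping it in the process); the second parses that string so code, text, origin, and rule become ordinary fields.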
Hi Splunkers, Is there any way to get rid of this known issue in the Stream app? Currently, I'm collecting DNS logs via the Stream app on Windows servers, and streamfwd.exe stops for no apparent reason while the UF keeps running. This is a known issue in the Stream docs. When I dig into the internal logs and server logs, I can't find anything related. For now, I wrote a Python script that adds a new txt file on the deployment server, reloads the server class, then erases it, every 12 hours. That is my little workaround, but it's not efficient: I can't tell when hosts stop streaming, which means losing data until the UFs restart. Do you have any other workaround for this? The known issue is: Windows: Capture stops with "pcap_loop returned error code -1 read error: PacketReceivePacket failed; network capture stopped" and isn't restarted. Workaround: Manually re-configure streams for the forwarder to resume, or restart the Splunk Forwarder service in Windows.
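Until the crash itself is fixed, one complementary workaround is an alert that notices when a host goes silent, so the forwarder can be restarted promptly instead of on a 12-hour timer. A hedged sketch (the index name is a placeholder; the 30-minute threshold is arbitrary):

```
| tstats latest(_time) as last_event where index=<your_dns_index> sourcetype=stream:dns by host
| eval minutes_silent = round((now() - last_event) / 60, 0)
| where minutes_silent > 30
```

Scheduled every 15 minutes or so, this at least bounds the data loss to the alert interval rather than the 12-hour reload cycle.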
I'm trying to follow guides on how to create a new indexed field: basically a field, "Splunk_HF", that gives us the name of the heavy forwarder the data came from. I'm having a hard time understanding how to actually grab the heavy forwarder's name. If this were a raw log I would attempt regex on the host, based on where that name sits in the log, but here I'm drawing a blank. It's like I'm pulling the info from thin air and I simply don't know the right syntax to make this happen. I'm sure plenty of people have done this before, and it should be similar for each of us. Can someone please steer me in the right direction with my configurations? This is my idea so far; please correct my mistakes: transforms.conf [getting_splunk_forwarder] DEST_KEY = MetaData:Host REGEX = I have no idea FORMAT = host::$1 props.conf TRANSFORMS-extract = getting_splunk_forwarder fields.conf [getting_splunk_forwarder] INDEXED = true I really do appreciate the Splunk docs, and generally people will post links as answers by themselves, but if someone could show me the proper syntax in the stanza it would help me understand.
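Since the heavy forwarder's own name never appears in the event text, there is nothing for a REGEX to capture; the usual trick is to hardcode each HF's name in its own transforms.conf (deployable per-forwarder via serverclass). A hedged sketch (stanza names and the HF name are illustrative):

```
# transforms.conf on each heavy forwarder -- FORMAT is hardcoded per HF
[getting_splunk_forwarder]
REGEX = .
FORMAT = splunk_hf::hf-prod-01
WRITE_META = true

# props.conf on the same heavy forwarder
[default]
TRANSFORMS-addhf = getting_splunk_forwarder

# fields.conf on the search head, so splunk_hf is usable as an indexed field
[splunk_hf]
INDEXED = true
```

REGEX = . just means "match every event"; WRITE_META = true appends the field to _meta rather than overwriting a DEST_KEY, which is what makes it a new indexed field instead of a rewrite of host.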
Hi All, I have a dashboard with highly important info that needs to be seen by every user. I'm looking for an option to add a link to the dashboard in the Search app, between the search bar and the search history. How do I do this? Thanks
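The area between the search bar and the search history isn't configurable, but the Search app's navigation bar just above it is: Settings > User interface > Navigation menus > default (in the Search app). A hedged sketch placing the dashboard immediately after Search (the view name is a placeholder for the dashboard's ID):

```xml
<nav search_view="search" color="#65A637">
  <view name="search" default="true" />
  <view name="my_important_dashboard" />
  <view name="datasets" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" />
</nav>
```

Every user of the Search app then sees the dashboard as the second item in the app's menu bar.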
Hi, the plan is to bring all PCs on the network into Splunk, and I need to know the average usage per client for both Linux and Windows (with Sysmon). I understand it varies case by case, but is there some kind of average? I need this for storage planning. Thank you
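Averages vary a lot by workload, so once a pilot group of PCs is onboarded it is usually more reliable to measure real ingest per host and extrapolate. A sketch using the license usage log (b and h are the built-in bytes and host fields):

```
index=_internal source=*license_usage.log type=Usage earliest=-7d
| stats sum(b) as bytes by h
| eval MB_per_day = round(bytes / 1024 / 1024 / 7, 1)
| sort - MB_per_day
```

As a very rough hedge before any data exists: Windows hosts with Sysmon are often quoted in the low hundreds of MB/day and plain Linux syslog far less, but the real numbers depend entirely on the Sysmon config and event volume.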
I am looking to count the number of events that occur before and after a specified time (8am) each day, to give a table like, for example:

Events before 8am | Events after 8am
400 | 50

Any help greatly appreciated!
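One way to get that table is to bucket each event by calendar day and by whether its hour is before 8, then chart the counts. A hedged sketch (assumes the local timezone is the one that matters):

```
<your search> earliest=-7d
| eval day = strftime(_time, "%Y-%m-%d")
| eval period = if(tonumber(strftime(_time, "%H")) < 8, "Events before 8am", "Events after 8am")
| chart count over day by period
```

chart count over day by period yields one row per day with the two columns from the example.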
I am receiving CPU utilization alerts frequently. Please help me troubleshoot and find the root cause. @thambisetty @isoutamo 
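A first step that often helps is checking which process is actually burning the CPU, using the introspection data every Splunk instance collects. A hedged sketch (field names are the standard _introspection ones):

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess earliest=-24h
| stats max(data.pct_cpu) as peak_cpu avg(data.pct_cpu) as avg_cpu by host, data.process, data.args
| sort - peak_cpu
```

If the top consumers are splunkd search processes, the next place to look is the scheduler logs (index=_internal sourcetype=scheduler) around the times the alerts fire.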
Hello, I am running a search over the last 7 days, and I am using the fixed_date field as the _time field. fixed_date can have any value in the last year, so I am filtering for results from the last 6 months. I want the weekly results to show for every Monday. The query below shows results for the last 2 Mondays, but before that it picks up Thursdays.

index=abcd sourcetype=abcd (IP=x.x.x.x OR IP=y.y.y.y) | eval _time=strptime(fixed_date,"%Y-%m-%d") | where _time > relative_time(now(), "-6mon") | bin _time span=w@w1 | stats count by IP ID _time | stats count as "Fixed vulnerabilities" by _time

Results I get (_time / Fixed vulnerabilities):
2020-05-07 / 3678
2020-05-14 / 1455
...<weekly results for a total of 6 months>...
2020-10-22 / 5543
2020-10-29 / 2212
2020-11-02 / 7732
2020-11-09 / 2213

Only the last two are Mondays; all before those are Thursdays. How do I get it for every Monday?
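A likely cause: weekly bin spans align to the UNIX epoch, and 1 January 1970 was a Thursday, so buckets snap to Thursdays. Snapping each event's time directly to its Monday with relative_time sidesteps the bin alignment entirely. A hedged sketch of the adjusted query:

```
index=abcd sourcetype=abcd (IP=x.x.x.x OR IP=y.y.y.y)
| eval _time = strptime(fixed_date, "%Y-%m-%d")
| where _time > relative_time(now(), "-6mon")
| eval _time = relative_time(_time, "@w1")
| stats count by IP ID _time
| stats count as "Fixed vulnerabilities" by _time
```

relative_time(_time, "@w1") rounds every timestamp down to the most recent Monday, so all buckets land on Mondays regardless of epoch alignment.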
Hello Splunkers, I'm trying to extract the "flags" field in the DNS logs. However, the TA provided by Splunk isn't working properly because the logs don't all have the same form. This is the regex provided by Splunk to extract the flags field:     (?<operation>[ R]) (?<opcode>.) \[(?<hexflags>[0-9A-Fa-f]+) (?<flags>....) (?<response>[^\]]+)\]     Log form case 1:      R Q [8381 A DR NXDOMAIN]     Log form case 2:     R Q [8081 DR NOERROR]     Log form case 3:   Q [0001 D NOERROR]   In case 1 the extraction works properly and flags="A DR", but in cases 2 and 3 none of the fields are extracted (the flags portion inside the brackets is shorter than the 4 characters the regex expects). Any idea how to extract these values?
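One option is to make the flags group variable-length, so it matches whether one or several flag letters are present. A hedged, untested variant of the Splunk regex (the character classes are assumptions about what flags can contain):

```
(?<operation>[ R]) (?<opcode>.) \[(?<hexflags>[0-9A-Fa-f]+) (?:(?<flags>[A-Z ]+) )?(?<response>[^\]\s]+)\]
```

Against the three samples this should give flags="A DR", flags="DR", and flags="D" respectively, with response captured correctly each time (backtracking stops the greedy flags group at the last space before the response word).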
I want to send logs from one namespace to a separate index, while all other logs go to one index. I am using splunk-connect with HEC to forward from the OpenShift cluster. Can anyone guide me on how this can be done? I tried indexRouting=true and adding a local Splunk section in the values file of the Helm chart, but I observe the token stored in env is only the local one, and the global value seems to give an error: "text":"Incorrect index","code":7,"invalid-event-number":1
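The "Incorrect index" error usually means the HEC token's allowed-indexes list does not include the target index, so that is worth checking on the Splunk side first. For the routing itself, Splunk Connect for Kubernetes supports per-namespace routing via an annotation (syntax hedged; check it against the SCK docs for your chart version):

```
# send everything from the "payments" namespace to the "payments_idx" index
oc annotate namespace payments splunk.com/index=payments_idx
```

The target index then also has to be added to the HEC token's allowed indexes (or the token left unrestricted); otherwise events bounce with exactly the error shown.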
Hello Splunkers: How do I configure a custom text "Big Data Analytics" on the login screen? Please help me, Thanks in advance!  
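Custom text on the login page is controlled by web.conf. A sketch (restart the Splunk web interface after the change):

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
login_content = Big Data Analytics
```

login_content also accepts a limited amount of HTML, if the text needs styling.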
Hi All, We are performing an impact analysis on application data that is already being ingested into Splunk. In the future the same application data will be ingested from the Bolt application, so that when Bolt is fully functional we can identify and correct anything going wrong. To do that analysis, as a first step we want to know the server details and log format, and to list the dashboards/alerts/saved searches configured for this application. We got the index and sourcetype details from the respective application owners, and using these we need to fetch the server details, dashboards, alerts, data models, and saved searches. Is there any search query or Splunk REST API query that can be used to fetch these details from Splunk?
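The knowledge objects can be inventoried from the search head with the rest command. A hedged sketch (substitute the real index/sourcetype strings; splunk_server=local keeps the query to the current search head):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*index=your_index*"
| table title eai:acl.app eai:acl.owner search

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*index=your_index*"
| table title eai:acl.app eai:acl.owner
```

The first search lists saved searches/alerts/reports referencing the index; the second greps the dashboard XML (eai:data) for it. Server details are easier to get from the data itself, e.g. | tstats values(host) where index=your_index by sourcetype.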
Is there a way to tell which method a sourcetype is using to get data into Splunk? For example, suppose I look at the sourcetypes of an index named main: |metadata type=sourcetypes index=main It displays a list of sourcetypes, but I want to know whether those sourcetypes came in via syslog, from a heavy forwarder, or from a universal forwarder. Is that possible?
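The metadata command can't show this, but the forwarder connection logs can. A hedged sketch that lists each sending host and what kind of forwarder it is (fwdType is typically uf, lwf, or full):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats values(fwdType) as forwarder_type values(sourceIp) as source_ip latest(version) as splunk_version by hostname
```

Joining that host list against a tstats by host, sourcetype over index=main then shows which sourcetypes arrive from which forwarder type; any host that never appears in tcpin_connections likely sent data via syslog or HEC instead.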
We have 2 indexes: 1. Username and machine details, and everything about the user's login. 2. Username and the machine's actual behaviour, like link speed, cloud usage, memusage, etc. I am trying to frame a query where both indexes are called, producing an output like: username, user machine, machine make, OS details, CPU, mem, ... I have tried 2 ways over the last 2 days: 1. (index1 sourcetype) OR (index2 sourcetype) | eval which I want | rename cols | table columns. Outcome: details from index 1 appear but not from index 2. 2. index1 sourcetype | join [search index2 sourcetype] | eval which I want | rename cols | table columns. Outcome: a few columns are not visible; the rest appear. Please help with which command to use and how to frame the query to complete my panel. Thanks.
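For this kind of correlation, stats over both indexes usually works better than join, because stats merges rows on the common field without join's result limits. A hedged sketch (all field names are placeholders and must exist, or be normalized with eval/rename, in both indexes):

```
(index=index1 sourcetype=st1) OR (index=index2 sourcetype=st2)
| eval username = coalesce(username, user)
| stats latest(machine)      as machine
        latest(machine_make) as machine_make
        latest(os)           as os
        latest(cpu)          as cpu
        latest(mem)          as mem
        latest(link_speed)   as link_speed
        by username
```

The reason variant 1 showed only index 1's columns is usually that the two sourcetypes use different field names, so the eval/rename normalization has to happen before the stats, not after it.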
I understand the error has to do with disk space, but I have no idea how to actually fix the issue. I know how to locate the files, but I'm not sure what's taking up the disk space. I tried troubleshooting it myself but can't seem to get a solid answer. I'm also kind of new to Splunk, as I'm a cybersecurity student. Would appreciate some help! Thanks.
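A quick way to see what is actually consuming the space is to size the indexes from within Splunk. A hedged sketch:

```
| dbinspect index=*
| stats sum(sizeOnDiskMB) as size_MB by index
| sort - size_MB
```

For context: Splunk stops indexing when free space on the relevant partition drops below minFreeSpace in server.conf (5000 MB by default), so the usual remedies are freeing or adding disk, tightening index retention (frozenTimePeriodInSecs / maxTotalDataSizeMB), or lowering minFreeSpace as a short-term stopgap.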