All Topics



Can you please help with extracting the fields from the sample log below? I am unable to escape the "'// &" '" characters in the log using regex. I am trying to extract upstream_response_time and connection_requests.

{"log":"[1] api_access.log: [1618591069.220218866, { \"msec\": 1618591069.219, \"remote_addr\": \"10.248.32.1\", \"x_forwarded_for\": \"10.233.42.16, 10.248.32.1\", \"remote_port\": \"41474\", \"pipelined\": \".\", \"body_bytes_sent\": \"4554\", \"bytes_sent\": 5150, \"request_time\": 1.066, \"upstream_response_time\": \"1.066\", \"upstream_response_length\": \"18456\", \"upstream_status\": \"200\", \"kore_route\": \"-\", \"koreserver\": \"KoreServer/\", \"host\": \"app-artificial-intelligence-dev.t3-openshift1.*\", \"hostname\": \"kore-app-62-g6522\", \"server_name\": \"_\", \"request_completion\": \"OK\", \"status\": 200, \"connection_requests\": 2, \"request_uri\": \"/api/1.1/builder/streams/st-332b9e29-e487-567e-b382-56e0fa4beb9d/dialogs/dg-3558dfff-9932-5640-a364-5f7202d5dfc8/components?rnd=qdzsm9\", \"request_method\": \"GET\", \"request_content_type\": \"application/json;charset=UTF-8\", \"request_content_length\": \"-\", \"request_total_length\": 1735, \"args\": \"rnd=qdzsm9\",\"is_args\": \"?\", \"x-traceid\" "-\", \"http_user_agent\": \"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36\" }\", \"podname\"=\u003e\"kore-app-62-g6522\"}]\n","stream":"stdout","time":"2021-04-16T16:37:49.39632196Z"}
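One possible approach, sketched in Python: because the inner quotes in this log are escaped as \", the pattern has to match the literal backslash-quote pairs rather than bare quotes. The log line below is a truncated stand-in for the one in the post.

```python
import re

# Truncated stand-in for the log line in the post; the inner JSON keeps
# its backslash-escaped quotes (\").
log = r'{"log":"... \"request_time\": 1.066, \"upstream_response_time\": \"1.066\", ... \"connection_requests\": 2, ..."}'

# Match the literal \" pairs (regex \\\" -> backslash + quote) around key and value.
urt = re.search(r'\\"upstream_response_time\\":\s*\\"(?P<urt>[\d.]+)\\"', log)
creq = re.search(r'\\"connection_requests\\":\s*(?P<creq>\d+)', log)

print(urt.group('urt'))    # 1.066
print(creq.group('creq'))  # 2
```

The same patterns should carry over to an SPL rex, but SPL string quoting typically requires the backslashes to be doubled again; verify the exact escaping interactively in the search bar.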
This is probably my favorite Splunk quote, but I have no idea what it means. Can anyone point me in the right direction? "All Batbelt. No Tights" -Marco
Hi, I'm kind of new to the Splunk world and I'm trying to create a new extraction field. Here are two examples of my logs:

14394300 SERVER1 02772 SND_OK 0000 NbF=1;TEST2N.02503.02772.SERVER2;
16434800 SERVER6 67965 SND_OK 0000 NbF=1;XXXRD.NN0015.67965.SERVER1;

I don't know how to extract the information in bold. My extract/transform looks like this:

(?P<time>\d+)\s+(?P<sdr>\w*)\s+(?P<seq>[^ ]*)\s+(?P<status>[^ ]+)\s+(?P<errorCode>\d+)\s+(?P<Rtn>.+)

My fields work correctly for my use (and different cases), but now I'm trying to be more accurate for <Rtn>. <Rtn> currently is: NbF=1;XXXRD.NN0015.67965.SERVER1; What I need is just the NN0015 or 02503. I tried a positive lookbehind and a positive lookahead without any success. Is it possible to have some help? Thanks!
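One way to avoid lookarounds entirely: the wanted value is always the second dot-separated piece after the semicolon, so you can anchor on ";<first piece>." instead. A quick Python sketch of the pattern:

```python
import re

samples = [
    "14394300 SERVER1 02772 SND_OK 0000 NbF=1;TEST2N.02503.02772.SERVER2;",
    "16434800 SERVER6 67965 SND_OK 0000 NbF=1;XXXRD.NN0015.67965.SERVER1;",
]

# The wanted value is the second dot-separated piece after the ';',
# so anchor on ";<first piece>." instead of using a lookbehind.
pattern = re.compile(r';[^.;]+\.(?P<code>[^.;]+)\.')

codes = [pattern.search(s).group('code') for s in samples]
print(codes)  # ['02503', 'NN0015']
```

The same group could likely be grafted onto the existing transform, e.g. replacing (?P<Rtn>.+) with (?P<Rtn>[^;]+;[^.;]+\.(?P<code>[^.;]+)\..+) — treat that variant as an untested sketch.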
Hello, I'm quite new to Splunk and recently installed an instance on a Debian machine. When trying to upload a log file I got the issue that the depth_limit in limits.conf was not big enough to execute the operation. The documentation states that the default file is located under $SPLUNK_HOME/etc/system/default/ and that custom configurations are stored under $SPLUNK_HOME/etc/system/local/. But neither of these folders exists inside the Splunk home directory. Any ideas why, and what I can do to achieve this? I hope you can help me out. Thank you in advance.
Hello, I am ingesting file auditing logs to monitor changes to certain files. I am monitoring events 4663 and 4656, which have an Object Name that lists the file path of the accessed file. We are only concerned about .pdf, .doc, and .docx files and would like to filter out any other file type. Currently I have the whitelist below in place, but I am getting stuck trying to figure out how to only ingest certain file types (.pdf, .doc, and .docx) under the Object Name field within the log.

whitelist = 4663,4656

Here is a 4663 log example (I bolded the part we want to filter on):

An attempt was made to access an object.
Subject:
  Security ID: Test User
  Account Name: Test User
  Account Domain: Test
  Logon ID: 12345567
Object:
  Object Server: Security
  Object Type: File
  Object Name: D:\Data\Test\Test.pdf
  Handle ID: 0x0552

Any help would be great. Thank you.
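The extension check itself is a small regex. A Python sketch to validate it against Object Name values; whether it can be applied at input time depends on your setup — the Windows event log input's advanced whitelist format (whitelist = EventCode=%...% Message=%...%) is worth checking against the inputs.conf docs for your version, and a props/transforms filter on the indexer is the usual fallback.

```python
import re

# Keep only .pdf, .doc and .docx, case-insensitively, at the end of the path.
ext_ok = re.compile(r'\.(?:pdf|docx?)$', re.IGNORECASE)

print(bool(ext_ok.search(r'D:\Data\Test\Test.pdf')))     # True
print(bool(ext_ok.search(r'D:\Data\Test\report.DOCX')))  # True
print(bool(ext_ok.search(r'D:\Data\Test\image.png')))    # False
```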
How do I increase the number of lines per report generated by data mapping from, say, 8,000 to 40,000? The reports I get have about 8,000 lines and I would like to increase this number.
I recently started learning Splunk. Could you help me? I have a list of users and I'm looking for a search query to fetch the login attempts of those user accounts.
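A common pattern for this is a subsearch over a lookup holding the user list; every name below (index, sourcetype, field and file names) is an assumption to adapt to your data:

```
index=your_auth_index sourcetype=your_auth_sourcetype action=login
    [| inputlookup user_list.csv | fields user ]
| stats count AS login_attempts BY user
```

The subsearch expands to (user="a" OR user="b" ...) as long as the lookup's field name matches the field name in the events.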
Hi, below is the result of a lookup command. How do I exclude the other information if I base it on BusinessUnit? For example, I want to show BU2 only... but there may be cases where I need to show BU1 only. How can I filter my lookup result?

Application  BusinessUnit  DATE       CALCMIPS
App1         BU1           31DEC2020  20
App2         BU2
App3         BU1
App4         BU1

My Splunk query looks like:

index=index1 sourcetype=source1 [ |inputlookup Application.csv where BusinessUnit = BU1 | return 1000 ACCOUNT_CODE] | lookup Application.csv ACCOUNT_CODE OUTPUT Application BusinessUnit ApplicationRTO | table Application BusinessUnit DATE MVS_SYSTEM_ID CALCMIPS

Thanks and Regards,
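One low-effort approach, sketched against the query in the post: since the lookup has already added BusinessUnit to each result, a trailing search can filter the table to whichever BU is needed (swap BU1/BU2 as required):

```
index=index1 sourcetype=source1
    [| inputlookup Application.csv where BusinessUnit = BU1 | return 1000 ACCOUNT_CODE ]
| lookup Application.csv ACCOUNT_CODE OUTPUT Application BusinessUnit ApplicationRTO
| search BusinessUnit=BU1
| table Application BusinessUnit DATE MVS_SYSTEM_ID CALCMIPS
```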
The input randomly stopped ingesting data about 2 weeks ago. I tried installing it on a separate front end to see if it would start importing data again, and it won't. I am using Security Center sc 5.14.1; it was working fine for a little over a month. When I change the input I see it authenticate to Tenable with the user credentials, but the automated timer to start the job does not appear to ever kick off, whether I express it in seconds or in cron format. It was working fine using cron 15 */1 * * *. I tried moving to 3600. Changed start time to blank... Any ideas?
Hi Splunkers, I need some help with a regex/command to extract the file name from a file path:

path\\to\\the\\file\\file_name
or
path\\to\\the\\file\\file_name (path\\inside\\file)

Currently I have this EVAL command in my props.conf:

EVAL-file_name = mvindex(split(filePath,"\\"),-1)

The EVAL command works fine for most of the paths. But sometimes the path is not common and contains parentheses and backslashes after the file_name value... Here are some examples of unusual paths I encountered (what I want to extract is in bold):

T:\\test\\FileZilla_3.47.2.1_win64_sponsored-setup.exe (NONAMEFL)
C:\\Users\\testuser\\Desktop\\testuser\\Local Settings\\Temporary Internet Files\\Content.IE5\\test\\ocspackage[1].exe($PLUGINSDIR\\$PLUGINSDIR\\RemCom.exe)
C:\\TEST\\testing\\@Archives\\@SRV\\SRV_Servers\\tests\\ocs-inventory\\OCSNG_AGENT_DEPLOYMENT_TOOL_1.0.1.2.zip ($INSTDIR\\RemCom.exe)

With my current configuration I extract only the value after the last "\\" of the line... Could you help me construct a regex/command able to extract the right values? Thanks
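One way to handle the unusual paths is to do it in two steps: strip any trailing parenthesised part first, then take what follows the last backslash. A Python sketch of the logic (the sample paths are shortened stand-ins for the ones in the post):

```python
import re

# Shortened stand-ins for the paths in the post: an optional " (...)"
# or "(...)" suffix can follow the real file name.
paths = [
    r"T:\\test\\FileZilla_3.47.2.1_win64_sponsored-setup.exe (NONAMEFL)",
    r"C:\\temp\\ocspackage[1].exe($PLUGINSDIR\\$PLUGINSDIR\\RemCom.exe)",
    r"C:\\archive\\OCSNG_AGENT_DEPLOYMENT_TOOL_1.0.1.2.zip ($INSTDIR\\RemCom.exe)",
    r"C:\\plain\\file_name.txt",
]

def file_name(path: str) -> str:
    # 1. Drop a trailing parenthesised part, e.g. " (NONAMEFL)" or "($INSTDIR\\...)".
    path = re.sub(r'\s*\([^)]*\)\s*$', '', path)
    # 2. Keep what follows the last backslash (doubled in these logs).
    return path.split('\\\\')[-1]

print([file_name(p) for p in paths])
# ['FileZilla_3.47.2.1_win64_sponsored-setup.exe', 'ocspackage[1].exe',
#  'OCSNG_AGENT_DEPLOYMENT_TOOL_1.0.1.2.zip', 'file_name.txt']
```

If this matches the intent, the two steps can likely be folded into the props.conf EVAL with replace() before the split, e.g. EVAL-file_name = mvindex(split(replace(filePath, "\s*\([^\)]*\)\s*$", ""), "\\"), -1) — treat that as an untested sketch.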
Hi, I have a dashboard where all panels are based on a base search that begins like this:

index=test sourcetype=st_test $text$

The token "text" is associated with the text input (it is what I want to improve). Here is the basic input:

<input type="text" token="text" searchWhenChanged="true">
  <label>Raw Document Text Search</label>
  <default>*</default>
</input>

What I want is to be able to click on any cell of a panel containing a table and have the whole dashboard filtered according to that value. Today I have to copy an Id value (for example; it could be a value from another column) and paste it into the text box input. I want to mechanize this process. Do you think it is possible? If yes, how can I do that? Thanks for your help!
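This is a standard use of a table drilldown: a <drilldown> element on the table panel can write the clicked cell's value into the existing token, and every panel using $text$ then refreshes. A sketch (the $click.value2$ token holds the clicked cell's value; the <search> content is elided here — check the drilldown docs for row-value variants):

```
<table>
  <search>...</search>
  <drilldown>
    <set token="text">$click.value2$</set>
  </drilldown>
</table>
```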
Hi, I am trying to do the following:
1. Using | inputlookup Application.csv where BusinessUnit = BU1, filter a list of Account Codes, e.g. AC1, AC2, AC3.
2. Use that list of Account Codes to filter my search on a different sourcetype: index=index1 sourcetype=sourctypeN ACCOUNT_CODE="AC1" OR ACCOUNT_CODE="AC2" and so on.
Thanks!
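The OR list doesn't have to be written by hand: a subsearch that returns only ACCOUNT_CODE expands automatically to (ACCOUNT_CODE="AC1" OR ACCOUNT_CODE="AC2" ...). A sketch against the names in the post:

```
index=index1 sourcetype=sourctypeN
    [| inputlookup Application.csv where BusinessUnit = BU1
     | fields ACCOUNT_CODE ]
```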
I am running Splunk Enterprise 8.0.6 and have Hadoop Data Roll configured, using Hadoop 3.2.1 with Java 1.8.0_282-b08. I have a virtual index configured to archive an index to AWS S3. The Hadoop Data Roll archiving process to S3 works, and the archived index is created in S3. However, when I try to search that archived index located in S3, I get the error below (which is from search.log). Has anyone run into this problem and know of a solution?   04-16-2021 08:59:01.938 ERROR ERP.s3_provider - SearchOutputStream - java.lang.RuntimeException: Configuration was not set. stacktrace=[com.splunk.roll.util.ConfU.force(ConfU.java:38), com.splunk.roll.util.ConfU.getRemoteHome(ConfU.java:56), com.splunk.roll.util.ConfU.getRollRoot(ConfU.java:48), com.splunk.roll.PathResolver.createV3(PathResolver.java:229), com.splunk.roll.PathResolver.createWithVersion(PathResolver.java:211), com.splunk.roll.PathResolver.resolveBuckets(PathResolver.java:152), com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1644), com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1609), com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:62), com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:34), com.splunk.mr.SplunkMR$SearchHandler.streamData(SplunkMR.java:809), com.splunk.mr.SplunkMR$SearchHandler.executeImpl(SplunkMR.java:1089), com.splunk.mr.SplunkMR$SearchHandler.execute(SplunkMR.java:906), com.splunk.mr.SplunkMR.runImpl(SplunkMR.java:1804), com.splunk.mr.SplunkMR.run(SplunkMR.java:1553), org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76), org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90), com.splunk.mr.SplunkMR.main(SplunkMR.java:1841)] 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - java.lang.Exception: java.lang.RuntimeException: Configuration was not set. 
stacktrace=[com.splunk.roll.util.ConfU.force(ConfU.java:38), com.splunk.roll.util.ConfU.getRemoteHome(ConfU.java:56), com.splunk.roll.util.ConfU.getRollRoot(ConfU.java:48), com.splunk.roll.PathResolver.createV3(PathResolver.java:229), com.splunk.roll.PathResolver.createWithVersion(PathResolver.java:211), com.splunk.roll.PathResolver.resolveBuckets(PathResolver.java:152), com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1644), com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1609), com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:62), com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:34), com.splunk.mr.SplunkMR$SearchHandler.streamData(SplunkMR.java:809), com.splunk.mr.SplunkMR$SearchHandler.executeImpl(SplunkMR.java:1089), com.splunk.mr.SplunkMR$SearchHandler.execute(SplunkMR.java:906), com.splunk.mr.SplunkMR.runImpl(SplunkMR.java:1804), com.splunk.mr.SplunkMR.run(SplunkMR.java:1553), org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76), org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90), com.splunk.mr.SplunkMR.main(SplunkMR.java:1841)] 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.SplunkMR.run(SplunkMR.java:1569) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.SplunkMR.main(SplunkMR.java:1841) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - Caused by: java.lang.RuntimeException: Configuration was not set. 
stacktrace=[com.splunk.roll.util.ConfU.force(ConfU.java:38), com.splunk.roll.util.ConfU.getRemoteHome(ConfU.java:56), com.splunk.roll.util.ConfU.getRollRoot(ConfU.java:48), com.splunk.roll.PathResolver.createV3(PathResolver.java:229), com.splunk.roll.PathResolver.createWithVersion(PathResolver.java:211), com.splunk.roll.PathResolver.resolveBuckets(PathResolver.java:152), com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1644), com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1609), com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:62), com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:34), com.splunk.mr.SplunkMR$SearchHandler.streamData(SplunkMR.java:809), com.splunk.mr.SplunkMR$SearchHandler.executeImpl(SplunkMR.java:1089), com.splunk.mr.SplunkMR$SearchHandler.execute(SplunkMR.java:906), com.splunk.mr.SplunkMR.runImpl(SplunkMR.java:1804), com.splunk.mr.SplunkMR.run(SplunkMR.java:1553), org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76), org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90), com.splunk.mr.SplunkMR.main(SplunkMR.java:1841)] 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.roll.util.ConfU.force(ConfU.java:39) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.roll.util.ConfU.getRemoteHome(ConfU.java:56) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.roll.util.ConfU.getRollRoot(ConfU.java:48) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.roll.PathResolver.createV3(PathResolver.java:229) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.roll.PathResolver.createWithVersion(PathResolver.java:211) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.roll.PathResolver.resolveBuckets(PathResolver.java:152) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1644) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at 
com.splunk.mr.input.VirtualIndex.generateSplits(VirtualIndex.java:1609) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:62) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.input.VixSplitGenerator.generateSplits(VixSplitGenerator.java:34) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.SplunkMR$SearchHandler.streamData(SplunkMR.java:809) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.SplunkMR$SearchHandler.executeImpl(SplunkMR.java:1089) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.SplunkMR$SearchHandler.execute(SplunkMR.java:906) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.SplunkMR.runImpl(SplunkMR.java:1804) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - at com.splunk.mr.SplunkMR.run(SplunkMR.java:1553) 04-16-2021 08:59:01.938 ERROR ERP.s3_provider - ... 3 more 04-16-2021 08:59:02.038 INFO ERP.s3_provider - SplunkMR - finishing, version=6.2 ... 04-16-2021 08:59:02.044 INFO ERP.s3_provider - MetricsSystemImpl - Stopping s3a-file-system metrics system... 04-16-2021 08:59:02.044 INFO ERP.s3_provider - MetricsSystemImpl - s3a-file-system metrics system stopped. 04-16-2021 08:59:02.044 INFO ERP.s3_provider - MetricsSystemImpl - s3a-file-system metrics system shutdown complete. 04-16-2021 08:59:02.069 INFO ERP.s3_provider - MetricsConfig - Loaded properties from hadoop-metrics2.properties 04-16-2021 08:59:02.069 INFO ERP.s3_provider - MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s). 04-16-2021 08:59:02.069 INFO ERP.s3_provider - MetricsSystemImpl - s3a-file-system metrics system started 04-16-2021 08:59:02.092 ERROR ERP.s3_provider - Error while invoking command: /opt/hadoop/bin/hadoop com.splunk.mr.SplunkMR - Return code: 255 04-16-2021 08:59:02.092 INFO ERPSearchResultCollector - ERP peer=s3_provider is done reading search results.    
Greetings Splunkers: Referring to: eval - Splunk Documentation, where round(X,Y) returns X rounded to the number of decimal places specified by Y; the default is to round to an integer. I am attempting the following:

| eval MemoryUtilization = round((memTotalMB - memFreeMB) / memTotalMB * 100),2)

I am receiving the following error: Error in 'eval' command: Failed to parse the provided arguments. Usage: eval dest_key = expression.

I know I can do it this way:

| eval MemoryUtilization = ((memTotalMB - memFreeMB) / memTotalMB * 100)
| eval MemoryUtilization = round(MemoryUtilization,2)

Is there a way to combine the two eval statements into one? Thank you!
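The parse error comes from a misplaced parenthesis: the ,2 ends up outside the round() call. Keeping both arguments inside round() combines the two evals into one:

```
| eval MemoryUtilization = round((memTotalMB - memFreeMB) / memTotalMB * 100, 2)
```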
Hi,
Question 1) Can someone help me build a dashboard where, for each day and for each build number, it shows the build status (i.e. success/fail) in a bar graph?
Question 2) Same scenario for weekends as well: we must have a bar graph showing the day of each weekend, i.e. 17 April, 24 April, 1 May, but the weekend graph must also show each individual day's build number and status.
For question 1, the query I have is:

| eval status=if(buildstatus="false","success","fail")
| eval weekday=strftime(_time,"%F")
| table status,weekday
| stats count as "status" by weekday
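For question 1, a sketch of the usual shape (the index name is an assumption; the success/fail mapping is taken from the post; render the panel as a bar or column chart in the dashboard). timechart produces one count per day per status, which a bar chart can stack or cluster:

```
index=your_build_index
| eval status=if(buildstatus="false","success","fail")
| timechart span=1d count BY status
```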
I've got an HTTP API that produces a JSON payload of metrics. The payload is formatted in a way that also works for POSTing (via cURL) to a Splunk HEC and ultimately getting inserted into a "metrics"-style index. An example of the payload:

{
  "event": "metric",
  "time": 1618573805075,
  "host": "myhostname",
  "fields": {
    "metric_name:ok.count": 1,
    "metric_name:error.count": 2,
    "product_version": "1.2.3",
    "now_unix": 1618573805075052,
    "product_name": "my cool app"
  }
}

This works well and I can query the data using:

| mpreview index="my_index_name"

I'm trying to set up Splunk Universal Forwarder with a scripted input to cURL this endpoint and send it to the Splunk indexer over port 9997 as per normal. I can see that the metrics endpoint is being "hit" by the UF, but I can't see any data in Splunk. I have my Splunk-side props.conf as:

[my_json_metrics_via_suf]
INDEXED_EXTRACTIONS = json
KV_MODE = none

My UF inputs.conf:

[script:///opt/splunkforwarder/etc/system/bin/my_curl_script.sh]
interval = 5
index = my_index_name
sourcetype = my_json_metrics_via_suf
disabled = false

Does anyone know what config I'm missing? I can see the data arriving at the Splunk server via tcpdump.
Cisco ASA VPN monitoring: I haven't been able to finish this case for a week now. I need to display a table with information about session start and end, and the total time the user spent on the network. With the information that I collected, the table search looks like this:

sourcetype=cisco:asa message_id=722051 OR message_id=113019 OR message_id=722011 OR message_id=722037 OR message_id=722028 OR message_id=722010 zhanali
| eval session_info = case((message_id = "113019" OR message_id = "722011" OR message_id = "722010"), "session_end", message_id="722051", "session_start", message_id="722037" OR message_id = "722028", "session_close")
| eval start = if((session_info = "session_start"),_time,"null")
| eval end = if((session_info = "session_end"),_time,"null")
| eval close = if((session_info = "session_close"),_time,"null")
| table Cisco_ASA_user session_info start end close
| convert ctime(start) ctime(end) ctime(close)

I need to somehow keep only one line from the "close" lines that appear consecutively; preferably the first one in time.
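A sketch for collapsing consecutive session_close rows per user with streamstats (sort ascending first so "first in time" is the one kept; untested against your data):

```
... your existing search and evals ...
| sort 0 _time
| streamstats current=f window=1 last(session_info) AS prev_info BY Cisco_ASA_user
| where NOT (session_info="session_close" AND prev_info="session_close")
```

With current=f window=1, prev_info holds the previous event's session_info for the same user, so only the first close of each run survives the where clause.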
Hello, I'm faced today with something I do not understand. Here is the structure of my event (JSON structured):

{
  dateReponse: 1618309228736
  dateRequete: 1618309228622
  id: 4572d
  reponse: {
    dossier: [
      { $c: PERSONNE, $i: 1, $l: 1, dateCreation: 1477036197000, dateModification: 1495047526000, id: 1 }
      { $c: IDENTITE, $i: 2, $l: 1, dateCreation: 1477036197000, dateModification: 1513858108603, nom: NOM1, prenom: prenom1 }
      { $c: IDENTITE, $i: 3, $l: 1, dateCreation: 1479206837000, dateModification: 1513858108603, nom: NOM2 }
    ]
  }
}

I'd like to fillnull the field reponse.dossier{}.prenom with "unknown" when it is not present. The content keeps being blank. I tried adding mvexpand and spath (even if it's already JSON parsed), with no luck:

| mvexpand reponse.dossier{}.nom
| spath input=reponse.dossier{}
| fillnull value="unknown" reponse.dossier{}.prenom

I tried adding a complete new field after reponse.dossier{}.prenom in the fillnull command; it worked just fine for the new field, but still not for my reponse.dossier{}.prenom. I think I missed something somewhere. Any suggestion? Thanks in advance, Ema
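A pattern that often works here is to re-extract each array element as its own row and fill the per-row field, rather than the auto-extracted multivalue reponse.dossier{}.prenom: that multivalue field only holds the values that exist, so there is nothing in it to fill for the entries that lack a prenom. A sketch, untested:

```
| spath path=reponse.dossier{} output=dossier
| mvexpand dossier
| spath input=dossier
| fillnull value="unknown" prenom
```

After the mvexpand, each dossier entry is one event, so the fillnull can target the plain prenom field of the entries that are missing it.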
Hi All, hope you are all doing well. I am trying to read two simple txt files containing just a numeric value. These files get updated twice every day, morning and evening. I have used the same props.conf for both files. Splunk is able to read the first txt file properly in the morning and evening, but for the 2nd txt file, if the same type of data is present in the morning, then Splunk ignores that data in the evening. For example: if in the morning the 2nd txt file's value is 1 and in the evening the value is 15, then Splunk only reads "5" from the evening file.

[monitor://C:\test.txt]
sourcetype = test
ignoreOlderThan = 60d
disabled = false
crcSalt = <SOURCE>

[test]
DATETIME_CONFIG=CURRENT
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TRUNCATE=100
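A likely explanation: because the new value (15) starts with the same byte as the old one (1), Splunk treats the change as an append and only reads the tail, hence the lone "5". crcSalt = <SOURCE> only affects the initial CRC, not this resume behavior. One thing to try (treat as a sketch; the stanza path is a stand-in for your 2nd file, and CHECK_METHOD goes in a source-scoped props.conf stanza on the forwarder) is forcing a full re-read whenever the file changes:

```
[source::C:\test2.txt]
CHECK_METHOD = entire_md5
```

With entire_md5, Splunk checksums the whole file and re-indexes it on any change; modtime is the other documented option. Verify the setting against the props.conf spec for your version.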
Hi all, I'm trying to display an icon in a table when a specific IP address is found. To do that I have used the code and the CSS from the Splunk dashboard examples app. The table where I have to use the code is like this:

Time                 DNS      Count  Site
-------------------  -------  -----  -----
2021-04-16 12:25:25  8.8.8.8  10     Site One
2021-04-16 12:25:25  1.1.1.1  20     Site Two
2021-04-16 12:25:25  8.8.4.4  30     Site Three

Code and CSS as below:

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    var CustomIconRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            return _(['DNS']).contains(cell.field);
        },
        render: function($td, cell) {
            var ip_dns = cell.value;
            // Compute the icon based on the field value
            var icon;
            if (ip_dns != '8.8.8.8' && ip_dns != '8.8.4.4') {
                icon = 'alert-circle';
            } else {
                icon = 'check';
            }
            // Create the icon element and add it to the table cell
            $td.addClass('icon-inline numeric').html(_.template('<%- text %> <i class="icon-<%- icon %>"></i>', {
                icon: icon,
                text: cell.value
            }));
        }
    });
    mvc.Components.get('table1').getVisualization(function(tableView) {
        // Register custom cell renderer; the table will re-render automatically
        tableView.addCellRenderer(new CustomIconRenderer());
    });
});

------

/* Custom Icons */
td.icon { text-align: center; }
td.icon i { font-size: 25px; text-shadow: 1px 1px #aaa; }
td.icon .severe { color: red; }
td.icon .elevated { color: orangered; }
td.icon .low { color: #006400; }

/* Row Coloring */
#highlight tr td { background-color: #c1ffc3 !important; }
#highlight tr.range-elevated td { background-color: #ffc57a !important; }
#highlight tr.range-severe td { background-color: #d59392 !important; }
#highlight .table td { border-top: 1px solid #fff; }
#highlight td.range-severe, td.range-elevated { font-weight: bold; }

.icon-inline i { font-size: 18px; margin-left: 5px; }
.icon-inline i.icon-alert-circle { color: #ef392c; }
.icon-inline i.icon-alert { color: #ff9c1a; }
.icon-inline i.icon-check { color: #5fff5e; }

/* Dark Theme */
td.icon i.dark { text-shadow: none; }

/* Row Coloring */
#highlight tr.dark td { background-color: #5BA383 !important; }
#highlight tr.range-elevated.dark td { background-color: #EC9960 !important; }
#highlight tr.range-severe.dark td { background-color: #AF575A !important; }
#highlight .table .dark td { border-top: 1px solid #000000; color: #F2F4F5; }

When I use the column Count as cell.field there is no problem, but using the column DNS the icon is not set. I guess the problem comes from the field type, numeric / not numeric. How can I set the icon checking the DNS field (not numeric)? Thanks