All Posts

Hi @GIA Please check this search query (basically I have edited it in 4 places, removing the "|", and moved the stray "by" clause into the tstats command where it belongs):

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic by _time All_Traffic.src_ip All_Traffic.dest_ip
| search NOT (All_Traffic.src_ip [| inputlookup internal_ranges.csv ]) AND (All_Traffic.dest_ip [| inputlookup internal_ranges.csv ]) AND (All_Traffic.action="allow*")
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs.csv ioc as src_ip OUTPUTNEW last_seen
| append [
    | tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where (All_Traffic.src_ip IN [| inputlookup internal_ranges.csv ]) AND NOT (All_Traffic.dest_ip IN [| inputlookup internal_ranges.csv ]) AND NOT (All_Traffic.protocol=icmp) by _time All_Traffic.src_ip All_Traffic.dest_ip
    | `drop_dm_object_name(All_Traffic)`
    | lookup ip_iocs.csv ioc as dest_ip OUTPUTNEW last_seen ]
| where isnotnull(last_seen)
| head 51

To learn the lookup command, please check https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/lookup. Lookups, subsearches, append, tstats, and data models are fairly complex topics, and it may take a long time to understand them. Please don't lose hope; keep on learning, daily, bit by bit. Hope you got it, thanks.
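If the lookup part is new to you, here is the core pattern on its own (a minimal sketch; index=my_network_index is just a placeholder for wherever your events live):

index=my_network_index
| lookup ip_iocs.csv ioc as src_ip OUTPUTNEW last_seen
| where isnotnull(last_seen)

It matches each event's src_ip against the ioc column of ip_iocs.csv, adds the last_seen value wherever a match is found, and keeps only the matched events.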
Hi @indeed_2000 , yes, parallelizing data across more than one index is probably a good idea for increasing search speed, but having an index per day is a bit excessive and very difficult to manage! If anything, divide your data into two or three indexes, but not more! Ciao. Giuseppe
I see that Splunk 9.0.4 might still have compatibility with Linux kernel 2.6 ("linux-2.6"). Despite Splunk's documentation indicating the deprecation of kernel 2.6 since Splunk Enterprise 8.2.9, the RPM package name ("splunk-9.0.4-de405f4a7979-linux-2.6-x86_64.rpm") for Splunk 9.0.4 suggests it is still built on and for this kernel version. Additionally, RHEL 6, which uses kernel 2.6, remains in extended support until June 30, 2024. This means Splunk 9.0.4 "can run" on kernel 2.6 under RHEL 6, which is still under extended support. (Wow, 18 years of support for the 2.6 kernel from RHEL!) "https://download.splunk.com/products/splunk/releases/9.0.4/linux/splunk-9.0.4-de405f4a7979-linux-2.6-x86_64.rpm" The Splunk support page might not reflect this compatibility. So even though the Splunk 9.0.4 package name might indicate that it's built for Linux kernel 2.6, and RHEL 6 is still under extended support, you might have a hard time making the case to your boss that this is a good idea, let alone to Splunk Support, for this configuration. Cheers, Eddie
Also, if timestampOfReception is the main timestamp of the event, it should be properly parsed into the event's _time field. It makes searching the events much, much quicker.
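Timestamp extraction like that is configured in props.conf on the first Splunk instance that parses the data; here is a minimal sketch, assuming a sourcetype named my:json:events and an ISO 8601 value (the sourcetype name and format string are assumptions to adapt to your data):

[my:json:events]
# regex leading up to the timestamp value
TIME_PREFIX = "timestampOfReception"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40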
While technically one could think of a solution writing dynamically to a different index each day (you'd still need to have pre-created the indexes, though), Splunk is not Elasticsearch, so don't bring habits from there over to Splunk. Splunk works differently and uses different methods than Elasticsearch for storing, indexing, and searching data. And the summary indexing thing: well, if you want to summarize something, you must have something to summarize, so you're always summarizing over some period of time. That conflicts with summarizing in "realtime". Anyway, even if you had the ability to create a summary for, let's say, a sliding 30-minute window "backwards", as soon as you summarized your data, that summary would be invalidated by new incoming data. So it makes no real sense.
There is a setting for roles in Splunk that configures which indexes are searched by default when an index is not specified in the search itself. If I understood your question correctly, my guess is that this is what is going on here: the role of the user running the macro probably doesn't have the index where the data resides set as a default searched index. Here is a screenshot of the UI settings for a role's default searched indexes.
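The same setting lives in authorize.conf; a minimal sketch, assuming a role named my_app_role and that the data resides in someIndex (both names are placeholders):

[role_my_app_role]
# semicolon-separated lists of indexes the role may search / searches by default
srchIndexesAllowed = main;someIndex
srchIndexesDefault = main;someIndex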
Hi @gcusello, in each day there are lots of datapoints; that's why I need to break the data down into multiple indexes. It will probably increase search speed, like the sharding mechanism in Elasticsearch or InfluxDB.
Hello, my local Splunk address is 127.0.0.1:514. I enabled remote logging on my endpoint and entered the above address as the endpoint's (syslog) remote log server address, but I'm not receiving the logs from the endpoint in Splunk. Any advice, please? Thanks
The only exception to "don't edit files inside the default folder" is when you have created your own app. Then you must edit the default files before deploying the app to Splunk. But after that, through the GUI you again edit only the local files.
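As a quick illustration of the layering (the app name, sourcetype, and setting below are made up): the copy shipped in default is the baseline, and any GUI edit writes the same stanza to local, which always wins.

# $SPLUNK_HOME/etc/apps/my_app/default/props.conf  (shipped with the app)
[my:sourcetype]
TRUNCATE = 10000

# $SPLUNK_HOME/etc/apps/my_app/local/props.conf  (written by GUI edits, overrides default)
[my:sourcetype]
TRUNCATE = 20000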
Thanks for the reply! Indeed, I have already converted the file into the metrics CSV format of a separate row per timestamp per metric. That works, and I can ingest the data. However, it increases the file size from 92 MB to 1.4 GB, so it is very, VERY wasteful, to be sure. I will work the problem some more and see what I can do.
Hi Community People. Our team has stood up a new instance of Splunk, and we have deployed some cool new apps. One issue I have run into, however, is that there seems to be some weirdness in how the app expects the data. Specifically, the predefined queries (some using macros) seem not to work unless an index is specified. Is there an explanation for this?

sourcetype=[some preconfigured type from the app] | stats count by someField    <=== doesn't seem to work
index=someIndex sourcetype=appDefinedSourceType | stats count by someField    <=== this works
The number of rows is not an issue and Splunk regularly handles files much larger than 92MB. The TRUNCATE setting applies to events, not headers. Since you're using Python now, consider a scripted input to read the file and convert it to k=v format.
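A scripted input is registered in inputs.conf; a minimal sketch, assuming a hypothetical converter script placed in an app's bin directory (the app name, script name, sourcetype, and index are all placeholders):

[script://$SPLUNK_HOME/etc/apps/my_app/bin/convert_metrics.py]
# run every 5 minutes; the script prints k=v lines to stdout for indexing
interval = 300
sourcetype = my:metrics:kv
index = my_metrics
disabled = 0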
Thank you, that makes total sense!
The timechart command will produce zero results that pad the X-axis and should give the desired results.

<chart depends="$show_chart_terminations$,$timeEarliest$,$timeLatest$">
  <search>
    <query>... | timechart count BY some_field</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <option name="charting.axisX.minimumNumber">$timeEarliest$</option>
  <option name="charting.axisX.maximumNumber">$timeLatest$</option>
  <option name="charting.chart">column</option>
</chart>
Hi, I have a dashboard with a time picker and a dummy search to transform relative timestamps to absolute timestamps:

<search>
  <query>| makeresults</query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
  <progress>
    <eval token="timeEarliest">strptime($job.earliestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
    <eval token="timeLatest">strptime($job.latestTime$,"%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
  </progress>
</search>

Next, I have a chart querying something using the time picker from the form. By default, the chart automatically adjusts the X-axis to the results found, not to the entire searched timespan. I want to change this behavior and tried setting the charting.axisX options to the absolute timestamp values, but it doesn't seem to work. Is there something I am missing?

<chart depends="$timeEarliest$,$timeLatest$">
  <search>
    <query>... | chart count OVER _time BY some_field</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
  <option name="charting.axisX.minimumNumber">$timeEarliest$</option>
  <option name="charting.axisX.maximumNumber">$timeLatest$</option>
  <option name="charting.chart">column</option>
</chart>
Hi @indeed_2000, avoid creating an index for each day: Splunk isn't a database and an index isn't a table. You should create a different index when you have different retentions or different access grants. You can search within an index using the timestamp even if it's only one index; you don't need a different index per day! The update frequency of a summary index depends on the scheduled search you are using; as the name itself says, it's scheduled, so you have to give it a schedule frequency, which can be very frequent, depending on the execution time of the search itself. E.g., if the scheduled search runs in 30 seconds, you can schedule it every minute, but I don't suggest running it too frequently, because you could have skipped searches. Running a search in real time is also possible, but it requires many resources, so avoid it. Ciao. Giuseppe
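For example, a summary-populating search can be this simple (the index, sourcetype, field, and summary names below are placeholders): scheduled every 5 minutes over the previous whole 5-minute window, it appends pre-aggregated rows to the summary index.

index=my_index sourcetype=my:logs earliest=-5m@m latest=@m
| stats count avg(duration) as avg_duration by host
| collect index=my_summary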
I need to create a summary index continuously, in real time. Now I have two questions:
1. I run a Splunk forwarder on the client, and logs are sent to the Splunk server. Each line contains lots of data, so I need to build the summary index as soon as a log line is received and store the summary of that line in the summary index, continuously, in real time.
2. Is it possible to automatically create a new index for each day, like myindex-20240115, myindex-20240116, as data comes in from the forwarder?
Thanks
Datetime calculations, such as finding a difference, should be done with epoch times. So rather than formatting now(), you should parse timestampOfReception using strptime() so you can subtract one value from the other.
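For instance, a sketch assuming timestampOfReception is an ISO 8601 string (adjust the format string to your actual data):

| eval reception_epoch = strptime(timestampOfReception, "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval age_seconds = now() - reception_epoch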
| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic
| search NOT (All_Traffic.src_ip [| inputlookup internal_ranges.csv ])

Thank you so much!! Your second option yielded the correct results. Do you mind helping me with the other portion, please? Also, if you know where I can find documentation that can better help me understand lookups for these kinds of searches, that would be appreciated. I am new to Splunk, and as much as I like it, I find it challenging at times.
Hi Team, I have given a user a role with read-only access, but the user can still edit and delete reports. How do I restrict the user's access to read-only for the reports, so that the user cannot delete reports or edit the Splunk queries? The configuration and role capabilities I have set are given below. Please help me restrict the user's access.

[savedsearches/krack_delete]
access = read : [ * ], write : [ power ]
export = system
owner = vijreddy@xxxxxxxxxxxxx.com
version = 9.0.5.1
modtime = 1704823240.999623300
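For reference, in that ACL the write list is what controls who can edit or delete: any user holding the power role (or a role inheriting it) can modify the report. A sketch of a stricter stanza, assuming only admin should keep write access (adapt the role name to your environment):

[savedsearches/krack_delete]
access = read : [ * ], write : [ admin ]
export = system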