All Topics


Hi guys, I'm trying to isolate what is responsible for most of the data size on Phantom. My data/db/base folder is huge and it keeps growing, even though the logging level is really low and the vault is not something I use often.

/opt/phantom/data]$ du -hsx * | sort -rh | head -10 | grep db
2.6T db

Is there any way for me to query what is consuming so much space and maybe delete some old data? I know that Phantom ships scripts to remove containers and so on, but I personally don't think containers are the culprit here, and as things stand I don't have roughly double the space free to run a VACUUM if I delete them all. Thanks!
Hey all, I am trying to onboard CrowdStrike FDR logs using the Splunk Add-on for CrowdStrike FDR from Splunkbase. I want to enrich the aidmaster logs so that ComputerName is shown for each aid in the Splunk events. I have installed the add-on on the forwarder and configured the input as below:

With this input configuration we can see FDR logs, but the event coverage for ComputerName is very low (0.36). In short, we are not able to get ComputerName information for aids properly. I have a few questions:

1) Do I need to make changes on the SH as well?
2) Do I need to make any change in the add-on's savedsearches.conf for ComputerName to be shown?

Thanks in advance!
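In case it helps frame the question, here is a minimal sketch of the kind of enrichment I am after. The index, the sourcetypes, and the lookup name aidmaster_lookup are all placeholders, not names from the add-on. First, build a lookup from the aidmaster events (assuming they carry aid and ComputerName fields):

index=crowdstrike sourcetype="crowdstrike:aidmaster"
| stats latest(ComputerName) as ComputerName by aid
| outputlookup aidmaster_lookup.csv

Then enrich the sensor events at search time:

index=crowdstrike sourcetype="crowdstrike:events:sensor"
| lookup aidmaster_lookup.csv aid OUTPUT ComputerName
| table _time aid ComputerName event_simpleName

If the add-on already maintains such a lookup through its saved searches, pointers to the right lookup and saved search names would be much appreciated.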
Can I add a federated search as the root search in a dataset? I was about to create a data model for a dashboard spanning multiple Splunk deployments. However, when I used federated indexes in the dataset's root search, I could see the event count and I was able to add fields, but I get no results when actually searching with the data model:

| datamodel test search

No results!
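For reference, here is the minimal pair of checks I can run (just a sketch; only the data model name test comes from my setup), comparing the data model search against tstats over the same model:

| datamodel test search
| stats count

| tstats count from datamodel=test summariesonly=false

If tstats also returns nothing over the federated root search while the dataset editor shows a non-zero event count, that would point to a limitation of federated indexes in data models rather than to my dataset definition.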
I have created a lookup (.csv) file with known exceptions. My question is how to create a search that checks against those known exceptions. I want to get results only when a different exception appears (an exception that has never happened before), and I also want to make an alert from it. When I tried to write a search against that file, the results still include the exceptions that are in the .csv file. How can I do this? Is there a better option? My .csv file looks like this (140 exceptions), and here is my environment:
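Here is a rough sketch of what I mean, in case it clarifies the question. The index, the sourcetype, the lookup name known_exceptions.csv, and its exception column are placeholders for my real names:

index=my_index sourcetype=my_sourcetype
| rex field=_raw "(?<exception>\w+(?:Exception|Error))"
| search NOT [ | inputlookup known_exceptions.csv | fields exception ]
| stats count by exception

An equivalent form uses the lookup command and keeps only events where no match was found:

index=my_index sourcetype=my_sourcetype
| rex field=_raw "(?<exception>\w+(?:Exception|Error))"
| lookup known_exceptions.csv exception OUTPUT exception as known_exception
| where isnull(known_exception)
| stats count by exception

Either version can then be saved as an alert that triggers when the number of results is greater than zero.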
Hi, I have an issue with alerting and it's not working anymore. What am I doing wrong?

My query:

index="content" source="catalina.out" "org.apache.catalina.startup.Catalina.start Server startup" NOT Caesium
| rex field=_raw "(?ms)^(?P<boot_end>\\d+\\-\\w+\\-\\d+\\s+\\d+:\\d+)(?:[^ \\n]* ){7}(?P<boot_time>\\d+)" offset_field=_extracted_fields_bounds
| eval epoch_time = _time
| eval boot_sec = boot_time * 0.001
| eval boot_min = boot_sec/60
| eval sub_time = epoch_time - boot_sec
| eval human_epoch_time = strftime(epoch_time,"%y-%m-%d %H:%M:%S")
| eval human_sub_time = strftime(sub_time,"%y-%m-%d %H:%M:%S")
| table human_epoch_time boot_sec boot_min human_sub_time host

Output: I am not getting the duration anymore. The alert email I receive is missing the duration and the "initiated at" time:

application has been started on node host. Start Up Initiated at . Start Up Completed at 23-04-27 07:46:12 . Start Up Duration is minutes .

human_epoch_time    boot_sec  boot_min  human_sub_time  host
23-04-27 07:46:12                                       host
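The symptoms (boot_sec, boot_min, and human_sub_time all empty while human_epoch_time is populated) suggest the rex is no longer extracting boot_time, which usually means the log line format changed. As a diagnostic, something like the following sketch could confirm that; the simplified pattern is an assumption, not my original regex, and it tries to tolerate both the older "Server startup in 12345 ms" style and the newer bracketed, comma-grouped "Server startup in [12,345] milliseconds" style that recent Tomcat versions log:

index="content" source="catalina.out" "Server startup"
| rex field=_raw "Server startup in \[?(?<boot_time_raw>[\d,]+)\]?\s*(?:ms|milliseconds)"
| eval boot_time = tonumber(replace(boot_time_raw, ",", ""))
| stats count as events count(boot_time) as extracted

If extracted is much lower than events, the original (?P<boot_time>\d+) pattern needs updating along the same lines.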
Hi, I am trying to join 2 lookups. When I run them individually they are fine, but when I use the join command it takes forever. Is there a better, more efficient way to join them? The query I used is below. Thanks

| inputlookup compliance_data_high_severity.csv
| join type=inner max=0 [
    | inputlookup KononKV_system
    | where isnotnull(devices)
    | eval devices=split(devices, "|delim|")
    | eval data=split(data, "|delim|")
    | mvexpand devices
    | spath input=devices "IP Address" output=ip
    | spath input=devices "Component Type"
    | spath input=devices "Operating System"
    | spath input=data "System Acronym"
    `is_server("Operating System", "Component Type", is_server)`
    | search is_server="*"
    | fields ip "Operating System" "Component Type" ]
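One join-free pattern that is usually much faster is to run the expensive subsearch as the main search and enrich it with the CSV via the lookup command. This is only a sketch: it assumes the two sources are matched on the ip field and that severity is a column in the CSV, so adjust both to the real key and columns:

| inputlookup KononKV_system
| where isnotnull(devices)
| eval devices=split(devices, "|delim|")
| eval data=split(data, "|delim|")
| mvexpand devices
| spath input=devices "IP Address" output=ip
| spath input=devices "Component Type"
| spath input=devices "Operating System"
| spath input=data "System Acronym"
`is_server("Operating System", "Component Type", is_server)`
| search is_server="*"
| lookup compliance_data_high_severity.csv ip OUTPUT severity
| where isnotnull(severity)

The final where isnotnull(severity) reproduces the inner-join behavior: rows with no match in the CSV are dropped.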
I see there is a limit (1-9) in inputs.conf, but what about the serverclass.conf file? There is no mention of it in the documentation: https://docs.splunk.com/Documentation/Splunk/9.0.4/Admin/Inputsconf
Hello, every time I update my `eventgen.conf`, I need to restart my Splunk instance for the changes to take effect. Is there a better method that would pick the changes up automatically, or a setting for this? I didn't find anything in the docs about it.
Hi there! I need to update a specific field in a lookup while keeping all the old data as it is. I have lookup1_kvstore, which consists of several fields, one of them being check_date. I also have lookup2.csv, which consists of check_date and src_name. I need to update check_date in lookup1_kvstore from lookup2.csv, keeping all the old fields as they are except check_date. Thanks in advance!
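Here is a rough sketch of what I have in mind, assuming src_name is the key that links the two lookups and that lookup1_kvstore has a lookup definition of the same name (adjust names as needed):

| inputlookup lookup1_kvstore
| lookup lookup2.csv src_name OUTPUT check_date as new_check_date
| eval check_date = coalesce(new_check_date, check_date)
| fields - new_check_date
| outputlookup lookup1_kvstore

Because the pipeline starts from the full contents of lookup1_kvstore, every existing row and field is written back unchanged; only check_date is overwritten, and only for src_name values that actually appear in lookup2.csv.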
Hello everyone, I want to build a dashboard for specific accounts. I need to keep track of 20 accounts to see if they are logging in. I ran a stats count and can see all the accounts, but when I put this into a single value visualization, a count of all the logs appears; it counts every record for the accounts, and the result is displayed in the millions. I should only have a maximum of 20. Can the number show whether it sees an account, rather than counting the account and every associated log? Many thanks
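A distinct count is a minimal sketch of what I am after; the index and the account field name are placeholders, and the IN list stands in for my 20 accounts:

index=my_index account IN ("account01", "account02", "account03")
| stats dc(account) as accounts_seen

dc() counts each account once, no matter how many login events it has, so the single value visualization tops out at 20 rather than showing millions of events.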
Hi, I know there is an article saying fschange is deprecated, but it still exists in Splunk 9.x.x. I was wondering if anyone has hit this issue where host = $decideOnStartup when using fschange, and possibly how to fix it. This only happens with fschange for me. Also, any insight on alternatives to fschange would be appreciated.

[fschange:/root/somefile]
disabled = false
index = fs_change
recurse = true
pollPeriod = 30
sourcetype = fs_change

Thank you.
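One workaround sketch, on the assumption that the fschange input is picking up the unresolved default rather than the real hostname: set host explicitly in inputs.conf instead of relying on $decideOnStartup. Whether fschange honors a per-stanza host override I have not been able to confirm, so this sets it at the [default] level; my-real-hostname is a placeholder:

# inputs.conf on the affected instance
[default]
host = my-real-hostname

[fschange:/root/somefile]
disabled = false
index = fs_change
recurse = true
pollPeriod = 30
sourcetype = fs_change

As for alternatives, a [monitor://...] input covers content changes, or an external file-integrity tool whose output Splunk ingests, though neither is a drop-in replacement for fschange.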
Hi all, I am searching the _audit index for searches that users executed with the time range "All Time":

index=_audit search_et="N/A" search_lt="N/A" user!="splunk-system-user"

I got events with the following fields: info, has_error_warn, fully_completed_search, total_run_time, event_count, result_count, available_count, scan_count, drop_count, exec_count, api_et, api_lt, api_index_et, api_index_lt, is_realtime, search_startup_time, is_prjob, searched_buckets, eliminated_buckets, considered_events, total_slices, decompressed_slices, duration.command.search_index, and many others. I need your help and guidance on the meaning of the fields fetched from the _audit index. Thank you
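For context, here is the kind of summary I am trying to build with those fields, based on my current understanding (total_run_time is the search runtime in seconds; event_count, result_count, and scan_count are the numbers of events returned, results produced, and events scanned). This is just a sketch:

index=_audit action=search info=completed search_et="N/A" search_lt="N/A" user!="splunk-system-user"
| stats count as all_time_searches avg(total_run_time) as avg_runtime_sec max(scan_count) as max_events_scanned by user
| sort - avg_runtime_sec

Corrections to those field interpretations are exactly what I am looking for.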
Hi, I need to set up a condition in Splunk for how the business quarters are defined at my place of work. Our new financial year starts each year on February 1st: the 1st quarter is February to April inclusive, the 2nd quarter is May to July inclusive, the 3rd quarter is August to October inclusive, and the 4th quarter is November to January inclusive.

Currently I have this Splunk query:

index=_internal earliest=-1y latest=now
| eval month=strftime(_time, "%m")
| eval quarter = case( month>=2 AND month<=4, "Q1", month>=5 AND month<=7, "Q2", month>=8 AND month<=10, "Q3", month>=11 OR month<=1, "Q4" )
| eval year = if(month>=2, strftime("%y", relative_time(now(), "@y")+1."y"), strftime("%y", now()))
| eval quarter = "FY" . year . quarter

However, I am receiving the following error:

Error in 'eval' command: The expression is malformed. Expected ).

How can I set a new field called "Quarter" with this information in a Splunk query? Many thanks as always!
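A sketch of a corrected version, with two caveats. The "Expected )" error comes from relative_time(now(), "@y")+1."y", which mixes arithmetic and string concatenation in one expression; also, strftime takes the time first and the format second, so strftime("%y", ...) has its arguments reversed. This version derives the fiscal year from each event's own timestamp rather than from now(), and it assumes the fiscal year is labeled by the calendar year in which it ends (so February 2024 through January 2025 would be FY25); adjust if your convention differs:

index=_internal earliest=-1y latest=now
| eval month = tonumber(strftime(_time, "%m"))
| eval quarter = case(
    month>=2 AND month<=4, "Q1",
    month>=5 AND month<=7, "Q2",
    month>=8 AND month<=10, "Q3",
    month>=11 OR month==1, "Q4")
| eval fy = tonumber(strftime(_time, "%y")) + if(month>=2, 1, 0)
| eval Quarter = "FY" . fy . quarter

The tonumber() calls matter: strftime returns strings like "04", and comparisons such as month>=2 would otherwise risk being evaluated lexically.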
Hello, I would like to save one of my charts from Dashboard Studio in order to be able to reuse it in another dashboard. Does this function still exist in the new version? I tried to find an answer online, but nobody talks about that. Thanks for your reply.
Hello, I'm looking to compare the count of servers reporting into Splunk this week against the count that reported the week before. I'm using Dashboard Studio for my visualizations. My current query shows how many servers reported per day over the week; an represents the server name:

index="siem-ips" cim_entity_zone="global" | timechart dc(an)

How can I modify this to show the previous week overlaid on the current week? Thank you
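One sketch of this uses timewrap, keeping my index and field; the two-week window and daily span are assumptions to adjust:

index="siem-ips" cim_entity_zone="global" earliest=-2w@w latest=@w
| timechart span=1d dc(an) as servers
| timewrap 1w

timewrap folds the timeline so the chart shows one series for the latest week and one for the week before, both plotted over the same days, which a Dashboard Studio line chart can render directly.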
Hi all, I know that other people have asked similar questions, but I have had no success replicating their use cases. I am trying to display a timechart with lines showing sales for multiple stores, broken down by region and then city. For example, Region A has Cities A, B, and C; Region B also has Cities A, B, and C; and inside each of those cities there are between 2 and 5 stores. When we click a selector at the top to choose Region A, for example, I need to show a trellis, broken out by city, with a timechart whose lines represent the sales of each store over, say, the past 6 months. Hopefully I am explaining this well enough. Thanks
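A sketch of the shape of search this seems to need; the index, the region/city/store/sales field names, and the $region$ token are all placeholders:

index=sales_index region=$region$ earliest=-6mon@mon
| bin _time span=1w
| stats sum(sales) as sales by _time, city, store

With the results grouped by both city and store, the trellis layout can then be set to split on city, leaving one panel per city with a line per store. My understanding is that trellis needs the split fields present in the statistics output like this, since timechart ... by store only allows a single split field.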
Hi there! I have a question about this query. In the station_check_kvstore lookup, the field check_date contains 180 values for a single src_name, and we have many src_name values. For instance, when src_name = 51363, check_date contains 180 values, but with the query below only 100 check_date values are fetched for a single src_name. We need either all the values or just the latest date in check_date; a solution for either is welcome.

| inputlookup check_kvstore
| search src_name = 51363
| lookup station_check_kvstore src_name Email OUTPUT check_date
| table src_name Email check_date

Thanks in advance!
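Two sketches of the directions I have considered. First, my understanding is that the cap on returned matches comes from the max_matches setting of the lookup definition in transforms.conf, so raising it there might let all 180 values through; the collection and fields_list values below are placeholders for the real definition:

# transforms.conf
[station_check_kvstore]
external_type = kvstore
collection = station_check
fields_list = src_name, Email, check_date
max_matches = 200

Second, for just the latest date, the lookup command can be bypassed entirely by aggregating over the KV store contents, assuming check_date sorts correctly as a string (e.g. YYYY-MM-DD):

| inputlookup station_check_kvstore where src_name=51363
| stats max(check_date) as latest_check_date values(check_date) as all_check_dates by src_name, Email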
I have signed up for the Splunk Cloud trial and am struggling to find out how to ingest data from an AWS account. The Splunk Add-on for AWS shows a recommendation that I should be using "Data Manager for Splunk Cloud". The documentation says it should be available in the menu; I can see all the other entries but cannot see "Data Manager for Splunk Cloud". Do I have to change any settings for this to become available, or something else?
I have a search and resultant output like shown below. The search is:

eventtype=cacti:mirage host=onl-cacti-02 rrdn=traffic_in host_id IN (215) ldi IN (9069,9070,9071,9073,9074,9075,9077,9078,9079) hostname IN (slrmpqfh-c1mpt-01-owmlb01)
| reverse
| streamstats current=t window=2 global=f range(_time) as deltaTime range(rrdv) AS rrd_value_delta by name_cache
| eval isTraffic = if(like(rrdn,"%traffic%"),1,0)
| eval kpi = if(isTraffic==1,rrd_value_delta*8/deltaTime/1024/1024/1024,rrd_value_delta/deltaTime)
| noop feature_flag=stats:allow_stats_v2:false
| timechart span=5m limit=0 useother=f list(kpi) as kpi by name_cache

What I want are new fields fe01, fe02, and so on that give the percentage change in value for each of the series columns. I know timechart is not the command to use here; I tried eventstats and streamstats but wasn't able to do what I wanted. Each ldi in my search corresponds to a unique name_cache, and each host_id corresponds to a unique hostname; it is easier to filter data via the ldi and host_id fields than by typing the long and complicated name_cache and hostname values. For example, the new field FE01 should hold the percentage change between consecutive values, e.g. ((1.21 - 1.33) / 1.33) * 100, with the same formula for the other fields. In other words, the new fields FE01, FE02, and so on would show the traffic change percent. @somesoni2 @andrewtrobec @lakshman239 @efavreau @phanTom @diogofgm @woodcock
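A sketch of one way to get per-column percent change after the timechart, using streamstats to carry the previous row forward and foreach to apply the formula to every matching column. The fe* naming is a placeholder for whatever the timechart series columns are actually called, and the pattern assumes single-valued columns, so avg(kpi) or latest(kpi) in the timechart may work better than list(kpi):

... | timechart span=5m limit=0 useother=f avg(kpi) as kpi by name_cache
| streamstats current=f window=1 last(*) as prev_*
| foreach prev_* [ eval <<MATCHSTR>>_pct = round(('<<MATCHSTR>>' - '<<FIELD>>') / '<<FIELD>>' * 100, 2) ]

Inside the foreach, <<FIELD>> is the carried-over previous value (e.g. prev_fe01) and <<MATCHSTR>> is the matching current column (e.g. fe01), so each new *_pct field holds (current - previous) / previous * 100 for every 5-minute bucket.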