All Topics

We have a foo.csv which is updated regularly, and we have searches that require some of the data in foo.csv to run properly. I would like to solve this using a macro in the searches, but am having difficulties.

foo.csv:

field1,field2,field3
bar11,bar21,bar31
bar12,bar22,bar32
bar13,bar23,bar33

I need "bar11","bar12","bar13" to be inserted into a search, like so:

| pivot fooDM barData min(blah) AS min_blah filter field1 in ("bar11","bar12","bar13")

So I created a macro, myMacro, which (when run alone in a search) gives a quoted, comma-separated list:

[| inputlookup foo.csv
 | strcat "\"" field1 "\"" field1
 | stats values(field1) AS field1
 | eval search=mvjoin(field1, ",")
 | fields search]

I have tried the macro both with and without "Use eval-based definition", and I place it in the search like this:

| pivot fooDM barData min(blah) AS min_blah filter field1 in (`myMacro`)

I would love any help. Thank you!

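In case it matters, the non-eval definition as it stands in macros.conf is essentially this (hand-transcribed from Settings -> Advanced search; untested):

[myMacro]
definition = [| inputlookup foo.csv | strcat "\"" field1 "\"" field1 | stats values(field1) AS field1 | eval search=mvjoin(field1, ",") | fields search]
iseval = 0
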
I've got a report that runs on a schedule every five minutes. I would like "latest" to be set to the most recent increment of 5 minutes. This solution used to work but no longer appears to. Does anyone have any thoughts on how to achieve this? I cannot simply rely on latest=now(), because the report certainly will not always run exactly at the correct time. So I need to be able to snap to the latest 5 minutes so that my counts do not get improperly calculated.

Edit: Here is my base search. I'm trying to get latest to snap to the most recent five-minute increment. It's not returning any results.

index=_internal source=*license_usage.log* type=Usage earliest=-0d@d ([makeresults | eval latest=(floor(now()/300))*300 | fields latest])

However, if I do something like this, it does return results. I don't want this; I was just testing to see whether the syntax was messed up or something. The base search above is what I want, because it snaps latest to the most recent five-minute increment of the hour.

index=_internal source=*license_usage.log* type=Usage earliest=-0d@d ([makeresults | eval latest=relative_time(now(), "-m") | fields latest])

Why does relative_time(now(), "-m") work but (floor(now()/300))*300 doesn't?

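For what it's worth, the arithmetic itself seems right when run standalone: floor(now()/300)*300 rounds the current epoch time down to the nearest 300 seconds, so for example 10:07:42 becomes 10:05:00. This is just my sanity check, not the base search:

| makeresults
| eval snapped=floor(now()/300)*300
| eval snapped_readable=strftime(snapped, "%F %T")
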
We have a Splunk Enterprise installation with clustered virtual indexers. We have been advised that we need physical hardware for our indexers to scale up to the size we anticipate. What areas of performance are affected by virtualized indexers versus physical hardware?

Hi, I have found several locations with a props.conf in my Docker splunk:8.2 image:

./opt/splunk/etc/apps/legacy/default/props.conf
./opt/splunk/etc/apps/search/local/props.conf
./opt/splunk/etc/apps/search/default/props.conf
./opt/splunk/etc/apps/splunk_internal_metrics/default/props.conf
./opt/splunk/etc/apps/splunk_monitoring_console/default/props.conf
./opt/splunk/etc/apps/sample_app/default/props.conf
./opt/splunk/etc/apps/SplunkLightForwarder/default/props.conf
./opt/splunk/etc/apps/splunk_archiver/default/props.conf
./opt/splunk/etc/apps/splunk_secure_gateway/default/props.conf
./opt/splunk/etc/apps/splunk_rapid_diag/default/props.conf
./opt/splunk/etc/apps/splunk_instrumentation/default/props.conf
./opt/splunk/etc/apps/learned/local/props.conf
./opt/splunk/etc/system/default/props.conf

I noticed that when I add a sourcetype in the Splunk Enterprise web interface (Settings -> Sourcetypes), it is saved in two locations:

apps/search/local/props.conf
apps/search/metadata/local.meta

I was just wondering whether either of these two would be the right location to copy a manually configured props.conf file to, or whether I should rather add it to /opt/splunk/etc/system/default/props.conf instead. Thanks

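If the right answer turns out to be a dedicated app, I assume it would look something like this (the app name and the sourcetype stanza below are placeholders I made up, not anything in the image):

/opt/splunk/etc/apps/my_props_app/local/props.conf:

[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%d %H:%M:%S
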
We are facing a "4xx Client error" intermittently when executing jobs on synthetic hosted agents. When checking the script output, we can validate that the code is running successfully, and we can see the expected webpage screenshot captured by AppDynamics. We need your help to fix the issue.

Hi,

I would like to know the commands and procedures for handling Splunk failures:
1. What if the deployment server fails? Where do I check the status, and what is the command to check it through the CLI?
2. What if the cluster master fails, and what commands do I use to check it?
3. Where can we read up on troubleshooting concepts?

I have done research but could not find a proper answer anywhere. Please help me out with these topics.

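These are the kinds of commands I have found so far, but I am not sure they are the right ones (paths assume a default /opt/splunk install):

/opt/splunk/bin/splunk status                   # is splunkd running on this host?
/opt/splunk/bin/splunk show cluster-status      # run on the cluster master
/opt/splunk/bin/splunk list deploy-clients      # run on the deployment server
tail -f /opt/splunk/var/log/splunk/splunkd.log  # watch for errors
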
Hello, this article, https://research.splunk.com/stories/log4shell_cve-2021-44228/ , lists many Log4j attack vectors and how Splunk can help detect them, including which data models to implement/use and the SPL. However, the SPL includes various macros, and these macros do not exist in my Splunk implementation. Where do I find these macros? Thanks and God bless, Genesius

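For example, one macro the searches keep referencing is security_content_summariesonly. From what I can tell it would be defined in macros.conf along these lines, but this is my own unverified transcription, not something on my system:

[security_content_summariesonly]
definition = summariesonly=false allow_old_summaries=true
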
Hello experts, I have recently onboarded around 300 Windows devices. I followed the onboarding guide and the logs are being ingested as required, except for one field, i.e. sourcetype. The source and sourcetype come through as below:

source = WinEventLog:System
sourcetype = wineventlog

Can someone please help in identifying the issue? Thanks

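If it helps, I assume an explicit sourcetype could be forced per stanza in inputs.conf on the forwarders, like this (illustrative only; this is not what I currently have deployed):

[WinEventLog://System]
disabled = 0
sourcetype = WinEventLog:System
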
I have a dashboard with 65 panels (all are parsers). It is taking a while to load. I am using 4 base searches across the 65 panels, but I am still facing some lag in loading the entire dashboard.

For better viewability, I categorized the panels based on their utility and show them in tabs using CSS. From a performance perspective, I don't find any option other than using base searches, but that isn't helping me either.

Please provide effective ideas for improving the performance without reducing the number of panels involved.

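For reference, the base/post-process pattern I am using looks like this, simplified (the searches here are placeholders, not my real ones):

<search id="base1">
  <query>index=main sourcetype=parser_logs | stats count by parser, status</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>
<panel>
  <chart>
    <search base="base1">
      <query>| where status="failed" | stats sum(count) by parser</query>
    </search>
  </chart>
</panel>
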
Hi Team,

We have a dashboard with a button; clicking the button executes a function in a Python script, and we pass the data through an Ajax call. Before the Splunk 8 upgrade this was working fine, but after the upgrade we are getting the following error:

error:321 - Masking the original 404 message: 'The path 'xxxx' was not found.' with 'Page not found!' for security reasons

I tried searching for this error code but couldn't find much. Can you please help?

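In case it is relevant, my understanding is that custom endpoints may now need to be declared in restmap.conf and exposed in web.conf. This is a sketch of what I think that looks like (all names are placeholders, unverified):

# restmap.conf
[script:my_endpoint]
match = /my_endpoint
script = my_endpoint_handler.py
scripttype = persist
handler = my_endpoint_handler.MyHandler
requireAuthentication = true
python.version = python3

# web.conf
[expose:my_endpoint]
pattern = my_endpoint
methods = GET,POST
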
I have two questions.

1. Is it possible to have both stacked and unstacked columns in a single column chart? In the chart below, the line on top of each bar is the total per stacked column. I want to have the total column first, and then the stacked column (the split-up of the total) next to it.
Problem: Since I am not able to do this, I had to add the total as an overlay.

2. How can I show in the tooltip the value of a column other than the value the chart shows by default?
Let's assume I have TotalParts and TotalPartsRunTime. If I plot the chart by TotalPartsRunTime, then I can see the label "TotalPartsRunTime: value" for each column/stacked column in the tooltip. Along with that, I also want to show "TotalParts: value".
Problem: When I add TotalParts to the result, it is stacked as part of the already stacked column and creates a separate legend entry for it. What I want is to just show the TotalParts count in the tooltip.

Example scenario:
Application: ABC
val_2_B is the total time taken to process.
val_4 is the total count of val_2_B items that were processed (expected to show in the tooltip; it should not be plotted in the chart).

Please let me know if I am not clear.

| makeresults
| eval application="FSD", val_1="A", val_2=4839, val_3=5000, val_4=1000
| append [| makeresults | eval application="ABC", val_1="B", val_2=1000, val_3=3215, val_4=2000]
| append [| makeresults | eval application="ABC", val_1="E", val_2=478, val_3=4328, val_4=3000]
| table application val_1 val_2 val_3 val_4
| sort application
| streamstats count by application
| eventstats list(val_1) as val_1 by application
| foreach val_* [| eval name="copy_<<FIELD>> ".mvindex(val_1,count-1) | eval {name}=<<FIELD>>]
| stats values(copy_*) as * by application
| fields - val_1*

Hi Team,

We are collecting data from Alibaba Cloud through a heavy forwarder (using the Alibaba add-ons) and pushing the data to our Splunk Cloud. What we are seeing is that it collects all data from Alibaba Cloud, which is huge in size. Upon validating it, we realized that the events below make up 80% of the whole volume and are not required by us. So we want to exclude these events (rule_result=pass and status=200) from being collected.

We know this can be done by editing the props.conf file, but we have been trying for a long time without success. Can someone please advise us how to edit props.conf so that the events below are excluded on the heavy forwarder?

index=alibaba source="alibaba:cloudfirewall" rule_result=pass
index=alibaba source="alibaba:waf" status=200

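In case the details matter, this is the kind of configuration we have been attempting on the heavy forwarder (the transform names are ours, and the regexes assume rule_result=pass / status=200 appear literally in the raw event text; JSON events would need different patterns):

# props.conf
[source::alibaba:cloudfirewall]
TRANSFORMS-drop_pass = drop_fw_pass

[source::alibaba:waf]
TRANSFORMS-drop_200 = drop_waf_200

# transforms.conf
[drop_fw_pass]
REGEX = rule_result=pass
DEST_KEY = queue
FORMAT = nullQueue

[drop_waf_200]
REGEX = status=200
DEST_KEY = queue
FORMAT = nullQueue
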
I have a text file that has 8824 lines, and I have configured MAX_EVENT= 1000. Splunk only reads the file until line 7599. Why did Splunk not read my file to the end of the file? Is there anything else that I need to configure? It is a health-check file. This is my line-breaking setting:

BREAK_ONLY_BEFORE = CIC: Node IP Address|Service Status:|\-{10}|\={12}|Summary:|Memory Details on

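For reference, my full stanza is essentially this (the sourcetype name is a placeholder). I understand the documented spelling of the setting is MAX_EVENTS, and that TRUNCATE may also come into play, but I am not sure:

[my_healthcheck]
BREAK_ONLY_BEFORE = CIC: Node IP Address|Service Status:|\-{10}|\={12}|Summary:|Memory Details on
MAX_EVENTS = 1000
TRUNCATE = 0
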
Hi Splunkers, we have a behavior that we are not able to understand. The problem is the following: we are performing searches using a data model and, when we need to use stats, we want to select both raw-data fields and data model fields. However, when we try this, we are not able to see the raw fields. Let me show an example to better explain. If we try this simple search:

| from datamodel:"Authentication"
| search is_Failed_Authentication=1
| stats count by log_region log_country user

we expect stats to show in the output the two fields we manually added to the data, log_region and log_country, and the one owned by the data model, which is user. Unfortunately, when the results appear we can see only user in the returned table; log_region and log_country are empty. We know those fields are present and populated because, if we replicate the search with the same time range but without the data model, using the specific index, sourcetype, and source for the Windows events, stats returns the output with all three fields (in this case, user is of course the specific field of the Windows events). Is this normal? Is there a way to use both raw/manually added fields and data model fields?

Hi, we are using ServiceNow, which has been integrated with Splunk to generate incidents. The current query works fine for a single failure, but I want to generate an incident only when a log backup has failed repeatedly, 3 times. My current eval looks like:

| eval itsi_entity=objectName,
       itsi_event_key=objectId,
       itsi_correlation_key=objectId,
       itsi_summary="Backup "+eventStatus+" for "+objectName,
       message=message,
       itsi_message="Alerting time: "+human_readable_time+"~~"+field1+"~~"+field2+"~~"+field3+"~~"+field4+"~~"+field5+"~~"+field6+"~~"+field7+"~~"+field8,
       itsi_impact=case(
           like(message, "%Failed log backup of Oracle Database%"), "High",
           like(message, "%Failed backup of Oracle Database%"), "High",
           true(), "Medium"),
       itsi_urgency=case(
           like(message, "%Failed log backup of Oracle Database%"), "High",
           like(message, "%Failed backup of Oracle Database%"), "High",
           true(), "Medium")

I need something in the itsi_impact case statements so that when a log backup has failed 3 times, a High incident is generated. I tried to keep eval and count fields in the case statement, but it is not working.

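This is the direction I have been attempting, in case it helps (the helper fields is_log_backup_failure and failure_count are mine, and I am not sure eventstats by objectId is the right grouping):

| eval is_log_backup_failure=if(like(message, "%Failed log backup of Oracle Database%"), 1, 0)
| eventstats sum(is_log_backup_failure) as failure_count by objectId
| eval itsi_impact=case(
      failure_count>=3, "High",
      like(message, "%Failed backup of Oracle Database%"), "High",
      true(), "Medium")
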
Hello, I am trying to migrate my dashboard to the new Dashboard Studio. Splunk did it automatically, but the conversion from the XML to JSON is not correct at all. I have this piece of code in my XML source:

<search>
    <query>
        | makeresults
    </query>
    <earliest>$tk_range_date.earliest$</earliest>
    <latest>$tk_range_date.latest$</latest>
    <progress>
        <eval token="toearliest">strptime($job.earliestTime$, "%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
        <eval token="tolatest">strptime($job.latestTime$, "%Y-%m-%dT%H:%M:%S.%3N%z")</eval>
        <set token="jobearliest">$job.earliestTime$</set>
        <set token="joblatest">$job.latestTime$</set>
    </progress>
</search>

Where and how do I include it in my JSON source? Thank you!

This is the home page for all the other dashboards, hence four panels are shown in a row. Each panel drills down to its specific dashboard. On loading the page, the choropleth map by default shows the map of Africa, but the business wants the North and South American continents shown by default when the page loads. Is it possible to set the default view of the map like this?

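What I was hoping for is something like this in the panel source; the center/zoom values here are my guesses for framing the Americas:

<option name="mapping.type">choropleth</option>
<option name="mapping.map.center">(10,-80)</option>
<option name="mapping.map.zoom">3</option>
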
A new Ship To contact was confirmed after the license was sent. Can the new Ship To contact access the license that was already forwarded, or is there a way to send the license to the new Ship To contact?

Hi All, I am new to Splunk, so I am not sure if I am doing this right. For one of my use cases, I am trying to use a condition match on a dropdown input to make a column visible when the condition matches and hide the column when it doesn't. Below is the code I am using, where I create a new token called cache_type_token and set its value from the input token cache_type. I cannot tell whether the issue is with how the token is set or with the match condition.

<input type="dropdown" token="cache_type" searchWhenChanged="true">
  <label>Choose which type of cache to view</label>
  <choice value="*UTable-*">Utilisation Table</choice>
  <choice value="ContextualLineCache*">Contextual Line Cache</choice>
  <choice value="GlobalRuleCache*">Global Rule Cache</choice>
  <choice value="RTLIM*">RTLIM</choice>
  <choice value="BATCH*">BATCH</choice>
  <choice value="COB*">COB</choice>
  <choice value="EOM*">EOM</choice>
  <choice value="*">ALL</choice>
  <default>Utilisation Table</default>
  <change>
    <set token="cache_type_token">$cache_type$</set>
    <condition match=" $cache_type_token$ == &quot;ContextualLineCache*&quot; ">
      <set token="cache_type_token">0</set>
    </condition>
    <condition match=" $cache_type_token$ != &quot;ContextualLineCache*&quot; ">
      <set token="cache_type_token">cacheExpired</set>
    </condition>
  </change>
</input>

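An alternative form I was considering is below, where each action lives inside a condition and a final bare condition acts as the fallback; I am not sure whether match() is the right function to use here:

<change>
  <condition match="match($cache_type$, &quot;ContextualLineCache&quot;)">
    <set token="cache_type_token">0</set>
  </condition>
  <condition>
    <set token="cache_type_token">cacheExpired</set>
  </condition>
</change>
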
Hi all! I know ES ships with a TAXII client to ingest threat intel over TAXII. Does anything exist for users who do not have ES? I am trying to ingest intel (in STIX 2.1) distributed via a TAXII 2.1 server into Splunk. Thanks!