Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, if I try to show the source of an event, Splunk shows only "loading ...". I made sure that the search was finalized. We are using Splunk 8.0.8. Edit: I start a search with "index=..." and "source=...". I open an event -> Event Actions -> Show Source. The loading screen never displays any result or source, just "loading ....".
Hi, I registered for the Splunk Phantom Community Edition download 3 days ago. However, the approval is still pending and I have not received the link so far. Please let me know how I can get the download link. Thanks in advance.
Hi, my environment has multiple apps. I have a requirement to default a value into a temp field. My eval works in a search, but the EVAL in props.conf isn't creating the new field. Please help me troubleshoot. My conf files are below:

INPUTS ON FORWARDERS:
[monitor:///var/log/omega]
index=foo_bar_transaction
sourcetype=foo_car

PROPS ON SHC:
[foo_car]
EVAL-tempvariable = "Test_Eval"
EVAL-datacenter = if(IN(mvindex(split(host,"-"),1),"clc","dkn"),"DANGER",mvindex(split(host,"-"),1))

INDEXER:
/opt/splunk/bin/splunk cmd btool props list foo_car --debug
/opt/splunk/etc/slave-apps/INFRA_APP_logs/default/props.conf [foo_car]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/system/default/props.conf AUTO_KV_JSON = true
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/local/props.conf MAX_DAYS_HENCE = 40
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = True
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/system/default/props.conf TRUNCATE = 10000
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
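For reference, EVAL- settings in props.conf are applied at search time, so they need to be visible to the search head tier; running btool on an indexer won't show whether the SHC members have them. A minimal sketch, assuming the existing app name from the post and deployment via the SHC deployer:

# $SPLUNK_HOME/etc/apps/INFRA_APP_logs/default/props.conf on each SHC member
[foo_car]
EVAL-tempvariable = "Test_Eval"

Verifying with /opt/splunk/bin/splunk cmd btool props list foo_car --debug on a search head member (not the indexer) should then show the EVAL lines.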
Hi, I am trying to calculate the age of a task. The time is in the format below. What am I missing?
| makeresults
| eval Last_Checkin="2021-05-26T20:47:22Z"
| table Last_Checkin, _time
| eval Age_Days=ceiling((now()-strptime(Last_Checkin,"%Y-%m-%dT%H:%M:%S%3NZ"))/86400)
| eval CVE_Age=case(
    Age_Days<30,"A_0 to 29 Days",
    Age_Days>=365,"G_365 Days+",
    Age_Days>=180,"F_180 to 364 Days",
    Age_Days>=120,"E_120 to 179 Days",
    Age_Days>=90,"D_90 to 119 Days",
    Age_Days>=60,"C_60 to 89 Days",
    Age_Days>=30,"B_30 to 59 Days",
    0==0,"H_No Age Data")
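For what it's worth, with a timestamp like 2021-05-26T20:47:22Z the %3N directive expects three sub-second digits that aren't present, so strptime() likely returns null and every row falls through to the default case. A sketch of the parse step without the sub-second directive, assuming the timestamps never carry fractional seconds:

| makeresults
| eval Last_Checkin="2021-05-26T20:47:22Z"
| eval Age_Days=ceiling((now()-strptime(Last_Checkin,"%Y-%m-%dT%H:%M:%SZ"))/86400)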
Greetings, would anyone have recommendations for forwarding events from physically isolated networks to a main network, thus providing a "single pane of glass"? The networks must be physically isolated due to security requirements. Data diode connections are approved. We cannot implement firewalls, and thus cannot support TCP connections (and the typical Splunk forwarding protocols). Here is a reference diagram of the logical architecture. [Image: Reference Logical Architecture] We are looking at utilizing data diodes, which in turn require UDP connections, which of course limits our options. We know how to get the traffic across the data diodes from a network-configuration standpoint; the big question is what data to send and how to structure it so it can be processed correctly by the upstream Splunk indexer. Our biggest issue is that UDP ingest on the upstream Splunk server only sees the Heavy Forwarders as the source rather than the original endpoints. We've looked at other forum posts with no luck on how to adapt them, if they are even adaptable, to this scenario, given we are ingesting Windows, Linux, and syslog from each isolated network. Example: https://community.splunk.com/t5/Getting-Data-In/Keeping-Host-data-when-using-Heavy-Forwarder/m-p/232328 What happens between the Heavy Forwarders and the upstream Splunk indexer is where we need help. Any and all creative ideas are welcome! For example:
- UDP packet manipulation inserting a source hostname on the Heavy Forwarders (but how can we determine the original source hostname?)
- Flat file export on the Heavy Forwarder plus ingest on the upstream Splunk indexer (but how can we leverage existing props and transforms without creating lots of custom regex?)
- Is there a better "flat file" method that would support Windows and Linux in addition to syslog (via SC4S)?
- Nah, forget it, just leave the networks separate
We are currently running Splunk Enterprise v8.2.3, on premises. Thank you in advance!
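One mechanism sometimes suggested for UDP-only links is Splunk's syslog output, which wraps each event in a syslog header on the way out of the Heavy Forwarder. Treat this outputs.conf fragment as an untested sketch (the group name and diode-facing address are placeholders, and events still have to be routed to the group, e.g. via a _SYSLOG_ROUTING transform); whether the upstream indexer can recover the original host from the syslog header depends on its input configuration:

# outputs.conf on the Heavy Forwarder (sketch)
[syslog:diodeOut]
server = 192.0.2.10:514
type = udp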
Hi, I am currently using the AWS Add-on for Splunk and am looking to see whether I can blacklist based on regex, beyond the application's UI option for blacklisting on event names (using the blacklist method provided by the app: https://docs.splunk.com/Documentation/AddOns/latest/AWS/CloudTrail). I have a central CloudTrail for all of my accounts and want to send the logs from a certain account to the nullQueue so they are not ingested. The logs do have a field for AccountID. The reason is that the logs from that specific account are about 80 percent of my ingestion and are not needed. I saw this article, but as mentioned, I am not able to modify these files directly because I am on Splunk Cloud: https://docs.splunk.com/Documentation/Splunk/6.4.1/Forwarding/Routeandfilterdatad#Discard_specific_events_and_keep_the_rest Since I do not have access to modify transforms.conf or props.conf, I was told I could modify the application's .conf files and send a zipped folder of the modified contents for the Splunk team to upload and install. I currently do have blacklisting implemented on event names, as this is part of the application. Is there any guidance on how I can blacklist based on a regex such as accountID=(the ID of the account I want to send to the nullQueue)?
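For context, the conf-file equivalent (the mechanism the linked routing doc describes) would be an index-time transform packaged in the app handed to Splunk support. A sketch, assuming the sourcetype is aws:cloudtrail and the account appears in CloudTrail's recipientAccountId JSON field (the account ID is a placeholder):

# props.conf
[aws:cloudtrail]
TRANSFORMS-dropacct = drop_noisy_account

# transforms.conf
[drop_noisy_account]
REGEX = "recipientAccountId"\s*:\s*"111111111111"
DEST_KEY = queue
FORMAT = nullQueue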
Hello. We would like to use the Splunk Add-on for AWS to ingest CloudTrail data from S3; however, we do not want to allow the entire s3.amazonaws.com namespace at the proxy, but rather only the specific S3 bucket that contains the data for ingest. What configuration or .py files will I need to modify to force the Splunk Add-on for AWS to request bucketname.s3.us-east-2.amazonaws.com instead of just s3.amazonaws.com? Thank you!
Hello, I am getting some error messages from my props configuration when parsing timestamp data. The sample event, my props configuration, and the error message are given below. Any help will be highly appreciated. Thank you so much.
Sample event:
<?xml version="1.0" encoding="ISO99991"?>
<SDWDATA>
<MDWDATA>
<TIMESTAMP>20110630143000</TIMESTAMP>
<USERTYPE>TEST</USERTYPE>
<SESSION>zx530</SESSION>
<IPADDR>142.225.163.60</IPADDR>
<SYSTEM>CDE</SYSTEM>
<EVENTID>NAMEE</EVENTID>
<EVENTTYPE>SEARCH</EVENTTYPE>
<RETURNCODE>0102</RETURNCODE>
<ERRORMSG>None</ERRORMSG>
<ESTATCD>1</ESTATCD>
<TESTCODE>210</TESTCODE>
<FNUMBER>1321</FNUMBER>
<OUTPUTCODE>10</OUTPUTCODE>
<RCODE>ASDC</RCODE>
<NAMECTRL>TESTPWE</NAMECTRL>
<USERID>I00XY09</USERID>
<ACCESS>ngd</ACCESS>
<CAMPUSCODE>p</CAMPUSCODE>
<SRCCD>ab31</SRCCD>
<SLNR>123456</SLNR>
</MDWDATA>
<SDWDATA>
PROPS:
[__auto__learned__]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]*)<MODTRANSAUDTRL>
TIME_PREFIX=<TIMESTAMP>
TIME_FORMAT=%Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD=14
TRUNCATE=1000
ERROR message: (screenshot not included in the post)
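As a point of comparison, the LINE_BREAKER above references <MODTRANSAUDTRL>, a tag that never appears in the sample, so events won't break where the timestamps are. A sketch that breaks on the repeating element actually present in the sample (the sourcetype name is a placeholder):

[mdw_xml]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<MDWDATA>
TIME_PREFIX = <TIMESTAMP>
TIME_FORMAT = %Y%m%d%H%M%S
MAX_TIMESTAMP_LOOKAHEAD = 14
TRUNCATE = 10000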
One of the SHC members is going up and down every 5 minutes. The KV store is stuck first at "starting" and then at "initial sync". The member runs fine from the back end but keeps flapping up and down on the search head clustering page. I tried a splunk restart on that member; the KV store then rebuilds and gets stuck at each phase, first at "starting" and now at "initial sync". Any help/inputs appreciated @rbal_splunk. Thank you
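For reference, the documented way to force one member to resync its KV store from the rest of the cluster is a local clean (take a backup first; this assumes at least one healthy member still holds the data):

/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean kvstore --local
/opt/splunk/bin/splunk start
# on restart the member performs an initial sync from the cluster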
I have the following JSON:
{
  "kind": "report",
  "id": {
    "time": "2021-12-24T15:45:01.331Z"
  },
  "events": [
    {
      "parameters": [
        { "name": "field1", "boolValue": true },
        { "name": "field2", "boolValue": true },
        { "name": "field3", "value": "value3" },
        { "name": "field4", "value": "value4" },
        { "name": "field5", "boolValue": false },
        { "name": "field6", "value": "value6" },
        { "name": "field7", "value": "value7" },
        { "name": "field8", "boolValue": false },
        { "name": "field9", "value": "value9" },
        { "name": "field10", "boolValue": false },
        { "name": "field11", "boolValue": false }
      ]
    }
  ]
}
I'd like to marry the name/value and name/boolValue pairings with their corresponding values, i.e.
field1: true
field4: value4
I've attempted to use spath to extract the values, but keep coming up short.
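A sketch of one way to flatten that structure, assuming the JSON is the whole raw event:

| spath path=events{}.parameters{} output=params
| mvexpand params
| eval name=spath(params, "name")
| eval value=coalesce(spath(params, "value"), spath(params, "boolValue"))
| table name value

The mvexpand turns each parameter object into its own row, and coalesce() picks whichever of value/boolValue is present.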
I've been looking around here and on Google but can't find an answer to this specific use case: I have two sourcetypes, both fed by Salesforce, that share a common ID, but the column names are different. How do I write a query to join them (or more specifically, augment the data from sourcetype A with some additional fields from sourcetype B)?
1. index=sfdc sourcetype=sfdc:LoginEvent
2. index=sfdc sourcetype=sfdc:User
The common field on both is the user's platform GUID (on LoginEvent the column is called UserId; on the User object it's just called Id). My primary source here is LoginEvent. I'm trying coalesce() but getting what looks like a cross join (I'm getting users in my table output with no LoginEvent corollary). If I were to write what I'm trying to get in SQL, this is it (hopefully this makes it easier to understand):
SELECT le.EventDate, le.Application, le.LoginType, le.Platform, le.Status, u.Type, u.Location
FROM LoginEvent le
JOIN User u ON (u.Id = le.UserId)
ORDER BY le.EventDate
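For illustration, the usual SPL pattern for this kind of SQL join is a shared key plus stats, with a filter to drop users who have no LoginEvent (an untested sketch using the field names from the post):

index=sfdc (sourcetype=sfdc:LoginEvent OR sourcetype=sfdc:User)
| eval joinkey=coalesce(UserId, Id)
| stats values(EventDate) as EventDate values(Application) as Application
        values(LoginType) as LoginType values(Platform) as Platform
        values(Status) as Status values(Type) as Type values(Location) as Location
        by joinkey
| where isnotnull(EventDate)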
I have AWS data being collected through an IDM on Splunk Cloud. I need to route certain data to another index based on a regex pattern. Is this feasible from the GUI? If yes, please advise. Many thanks in advance.
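For reference, the conf-file mechanism for regex-based index routing (stanza names, sourcetype, pattern, and target index below are all placeholders) is an index-time transform rather than a GUI setting:

# props.conf
[aws:cloudtrail]
TRANSFORMS-route = route_to_other_index

# transforms.conf
[route_to_other_index]
REGEX = some_pattern_here
DEST_KEY = _MetaData:Index
FORMAT = other_index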
Hi team, I need to fetch the InterfaceName from the payload below. I built a regular expression, but it is not working as expected. Can someone please verify it and point out where I am going wrong?
"xmlns:bw":"http://www.tibco.com"
"ns:InterfaceName":"MIAP-COVGBA",
"ns:CorrelationID":"f84nvkdwe-4ij43kj"
I created a regex expression like:
| rex field=_raw "ns:InterfaceName\W+(?<AppId>\w+)" | stats count by AppId
With this expression I am only matching the double quotes and am unable to fetch the data. Thanks in advance.
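One thing to note: \w+ stops at the hyphen in MIAP-COVGBA. A sketch that captures everything up to the closing quote instead:

| rex field=_raw "ns:InterfaceName\"\s*:\s*\"(?<AppId>[^\"]+)"
| stats count by AppId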
I am building a new Splunk environment, and due to the number of clients we have, we are building a simple distributed environment that consists of 1 Heavy Forwarder, universal forwarders all pointing to the Heavy Forwarder, and 1 indexer. I would like the Heavy Forwarder to forward only certain events on to the indexer. Based on my research (Route and Filter Data), I have built the configurations below.
outputs.conf:
[tcpout]
indexAndForward = 1
[tcpout:indexerGroup]
server = <ip address>:9997
props.conf:
[default]
TRANSFORMS-allNull = setnull
[WinEventLog:Security]
TRANSFORMS-WinEventLog=setNull,SendToUserAccountIndex,SendToGroupAccountIndex
transforms.conf:
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
[SendToUserAccountIndex]
REGEX = ^EventCode=(4720|4722|4725|4738)
DEST_KEY = _TCP_ROUTING
FORMAT = indexerGroup
[SendToGroupAccountIndex]
REGEX = ^EventCode=(4727|4730|4728)
DEST_KEY = _TCP_ROUTING
FORMAT = indexerGroup
Unfortunately, even though the specified events are showing in my Windows event logs, and the universal forwarder is forwarding the events properly, the events are not showing in the locally indexed items or on the destination server. If I remove the [default] stanza from props.conf, every event populates, including those that do not meet the criteria above. Based on this, I believe there is an issue with my sourcetypes. My environment is Splunk Enterprise version 8.0.3, installed on Windows Server 2016.
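For comparison, the pattern in the same Route-and-filter doc keeps selected events by sending everything to the nullQueue first and then turning matching events back to the indexQueue; note also that the props.conf above references setNull while the stanza is named [setnull], and stanza-name references are case-sensitive. A sketch of the documented variant:

# props.conf
[WinEventLog:Security]
TRANSFORMS-filter = setnull, setparsing

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = EventCode=(4720|4722|4725|4727|4728|4730|4738)
DEST_KEY = queue
FORMAT = indexQueue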
Hello all, one of the certificates associated with our universal forwarders is due to expire this week. While we have renewed the certificate, we have yet to push it to all universal forwarders. My question is: if we are unable to push the new certificate by the weekend, will the universal forwarders stop logging data and sharing it with the indexers? Thank you
Hey all, I'm learning Splunk and have tried to install a few apps to see how they work, but none of them seem to be working. I'm currently running Splunk Enterprise 8.2.4 installed on Server 2019, and I've tried to install "RWI - Executive Dashboard" as well as "Splunk Dashboard Examples". When I open the "Examples" page of the latter, it is a completely blank white page, but the other three pages seem to work (Overview, Dashboards, Search). As for the RWI Dashboard, if I click "Continue to app setup page", it also takes me to a blank white screen where I can't do anything. I have tried Internet Explorer as well as Chrome for my browser and updated the server. Is there something simple I am missing to make apps work properly? Thanks!
I have an alert that runs every 10 minutes from 6am-3pm PST.  It checks to see if a file has arrived within the last few minutes (file arrives at 6:10, check for file at 6:12 with 4 minute timeframe)... See more...
I have an alert that runs every 10 minutes from 6am-3pm PST.  It checks to see if a file has arrived within the last few minutes (file arrives at 6:10, check for file at 6:12 with 4 minute timeframe).  The problem is that every day, I get 1-3 false positive alerts.  I get an email saying that a file didn't arrive on time but when I go to perform the exact search with the hardcoded timeframe in question, the file did arrive on time.  Anyone ever run into this problem? Is this an issue with the alert scheduler?  I tried moving the alert back to give the file time to process in the system (file arrives at 6:10, check for file at 6:15 with 8 minute timeframe) but to no avail.
I would like to install TrackMe on Splunk Enterprise 7.3.4, but the only versions I see available are for Splunk 8.x. Is there a TrackMe release available for Splunk Enterprise 7.3.4?
Hi, as you can see in my XML, I use an eval command to define a health status, and this eval command is linked to a time token. Now I would like to correlate the rule in my eval command with the time token. For example, if I choose "last 7 days" in my time token, the hang count has to be > 5 and the crash count > 2; but if I choose "last 30 days", the hang count has to be > 1 and the crash count > 4. How can I do this, please?
<form theme="dark">
  <search id="bib">
    <query>
      index=toto ((sourcetype="hang") OR (sourcetype="titi") OR (sourcetype="tutu" web_app_duration_avg_ms &gt; 7000))
    </query>
    <earliest>$date.earliest$</earliest>
    <latest>$date.latest$</latest>
  </search>
  <input type="time" token="date" searchWhenChanged="true">
    <label>Période</label>
    <default>
      <earliest>-7d@h</earliest>
      <latest>now</latest>
    </default>
  </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search base="bib">
          <query>| stats count(hang_process_name) as hang, count(crash_process_name) as crash by site
| eval sante=if((hang&gt;5) AND (crash&gt;2), "Etat de santé dégradé","Etat de santé acceptable")
          </query>
        </search>
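One way that avoids token plumbing entirely is to derive the thresholds from the selected time range inside the search itself, using addinfo (a sketch with the thresholds from the post; in the dashboard XML the > characters would need to be &gt;):

| stats count(hang_process_name) as hang, count(crash_process_name) as crash by site
| addinfo
| eval range_days=round((info_max_time-info_min_time)/86400)
| eval hang_thr=if(range_days<=7, 5, 1), crash_thr=if(range_days<=7, 2, 4)
| eval sante=if(hang>hang_thr AND crash>crash_thr, "Etat de santé dégradé", "Etat de santé acceptable")

addinfo adds the search's info_min_time/info_max_time boundaries, so the thresholds follow whatever range the time picker supplies.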
Hi, I have a bunch of alerts in my savedsearches.conf. I would like to configure the alert action "Add to triggered alerts" (as offered when you add an alert via the UI). I am doing this programmatically. After restarting Splunk, the alerts do not show up as alerts, but rather as reports (in the Reports tab). Is this intended behaviour by Splunk, or am I missing something? An example alert can be found below.
[generic-alert-name]
alert.expires = 120d
alert.severity = 2
alert.suppress = 0
alert.track = 1
counttype = number of events
cron_schedule = * * * * *
description =
dispatch.earliest_time = rt-30d
dispatch.latest_time = rt-0d
display.general.type = statistics
display.page.search.tab = statistics
enablesched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = my_app
request.ui_dispatch_view = my_app
search = eventtype = "some-eventtype" | stats count by id | search count >= 4711
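One thing worth checking: the setting name documented in the savedsearches.conf spec is enableSched (capital S), and conf setting names are case-sensitive, so the lowercase enablesched above may be ignored, leaving the search saved as a plain report. A minimal sketch of the scheduling-related subset with the documented casing:

[generic-alert-name]
enableSched = 1
cron_schedule = * * * * *
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
search = eventtype="some-eventtype" | stats count by id | search count >= 4711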