All Topics
How to split/extract the substring before the first - counting from the right side of a field in a Splunk search? For example, my field contains:
Hostname = abc-xyz
Hostname = abc-01-def
Hostname = pqr-01
I want to see the values like below:
abc
abc-01
pqr
Please help me.
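A hedged sketch of one way to do this with rex: a greedy match consumes everything up to the last -, so the part before it lands in the capture group (the output field name host_prefix is my own choice):

```
... | rex field=Hostname "^(?<host_prefix>.*)-[^-]+$"
```

Because .* is greedy, abc-xyz yields abc, abc-01-def yields abc-01, and pqr-01 yields pqr; a value with no - at all is simply left unmatched.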
Hello, I am trying to settle on a new AWS event collection strategy. We are currently collecting using the older pull (SQS/SNS) method, and would like to move to a more modern and flexible way of doing it. I would like to collect AWS Config, CloudTrail, VPC Flow Logs, CloudWatch, and GuardDuty events from 100+ accounts into Splunk. I would also like a filtering capability at the source, where logs can be discarded based on some criteria (account or ARN) rather than sent to Splunk to be filtered. It seems like Splunk has changed its recommendations in the last few years (Lambda push, Firehose, etc.), and I am not certain what the recommended approach is now with as little complexity as possible. Project Trumpet seems like a good option, but I am not seeing Splunk steer people to that. It is also unclear what the caveats are with each of these approaches. If you go the Firehose route, how do you discard unwanted events? There are also cost considerations, and it's unclear which approach is more cost-effective. Wondering what people have settled on in similar circumstances and why. Thanks!
I am trying to implement the EventingCommand interface and return just one custom event at the end of processing multiple events in Splunk. I have the code written in Python and integrated, but for some reason the code returns multiple events in Splunk. Can someone point out what the problem is here?

import sys
from splunklib.searchcommands import dispatch, EventingCommand, Configuration

@Configuration()
class testpython(EventingCommand):
    def transform(self, records):
        list1 = [{'count': 1}]
        return list1

if __name__ == "__main__":
    dispatch(testpython, sys.argv, sys.stdin, sys.stdout, __name__)
Working on a Windoze box with limited resources - only IE, and that doesn't seem to work with splunk.com. I can do wget - what is the specific URL to wget for the 8.1.3 agent for Windows? I don't want to mess with IE, as that would involve security, and I would rather not point out this hole until after I have the forwarder downloaded and installed.
My Splunk forwarder is running as a splunk user and not root. What is the best way to grant this user read access to users' .bash_history logs without enforcing sudo? If I am not mistaken, there's no way for us to tell the Splunk forwarder to run sudo and supply its own creds. Any guidance will be very appreciated.
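One approach worth considering (an assumption on my part, not something the post confirms is acceptable in this environment) is a POSIX ACL, which grants only the splunk user read access without sudo and without loosening permissions for everyone:

```
# grant the splunk user read access to each user's history file
setfacl -m u:splunk:r /home/*/.bash_history
# the splunk user also needs traverse permission on the home directories
setfacl -m u:splunk:x /home/*
```

A default ACL (setfacl -d) on the home directories would be needed so that history files created later inherit the grant, and the filesystem must be mounted with ACL support.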
We have 90+ lookups to migrate from a 6.x Splunk cluster to a new 8.x cluster. How can this be done in bulk?
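A hedged sketch of one bulk approach, assuming the lookups are plain CSV files under each app's lookups directory on a search head (paths are illustrative):

```
# on the old search head: bundle every app's lookup CSVs
cd $SPLUNK_HOME/etc/apps
tar czf /tmp/lookups.tar.gz */lookups/*.csv

# on the new search head: unpack into the same relative paths
cd $SPLUNK_HOME/etc/apps
tar xzf /tmp/lookups.tar.gz
```

On a search head cluster the unpacked files would normally go through the deployer rather than straight into etc/apps, and KV store collections would need a different route (e.g. outputlookup).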
I am using the Splunk Add-on for ServiceNow in my ITSI instance. I have configured the Create SNOW Incident action for the episode, which is successfully creating an incident in ServiceNow. As a next step I want to inform the operations team about the recently created incident, so I have configured another action for the same episode to send an email. But I don't know how I can get the number of the recently created incident so that I can send it in the email subject line. Can anyone guide me on this?
Hi all, I configured the add-on in the subject for Azure Firewall log retrieval, using Log Analytics. It works; however, the CPU usage on the server where Splunk is installed and configured is always 100%. Has anyone experienced the same? How can I fix it? Thanks!
How to use a horseshoe meter for the below query?

index = *
| table podname cluster status
| dedup podname cluster status
| eval A=if(like(status,"%True"), "1", "0")
| stats count(eval(A=1)) as success
| stats count(eval(A=0)) as failure

I want to display a horseshoe meter in a Splunk dashboard like below: success/failure (3/0). If the failure count increases, like (2/1), then change the meter color. Can anyone please suggest how I can achieve this?
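One issue worth flagging in any answer: two stats commands in a row won't work here, because the second stats only sees the success field produced by the first. A hedged sketch that computes both counts in one pass (the derived percentage field is my own choice):

```
index=*
| dedup podname cluster status
| eval A=if(like(status,"%True"), 1, 0)
| stats count(eval(A=1)) as success count(eval(A=0)) as failure
| eval failure_pct=round(failure / (success + failure) * 100, 0)
| table failure_pct
```

Feeding failure_pct into the radial gauge ("horseshoe") visualization and setting its rangeValues and rangeColors options in the panel's Simple XML would then change the meter color as failures grow.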
Hi, the database connection gets disabled after some connection failures - caused by normal periodic DB maintenance - although auto_disable has been set to "false". The latter was supposed to stop the auto-disabling. The DB Connect app version is 2.4, and the DB input entry is shown below (Zs used for covering some private content):

[mi_input://ZZZZZ]
connection = ORA_LIVE
description = zzzzz
enable_query_wrapping = 1
index = db_oracle
input_timestamp_column_fullname = (003) NULL.SYSTEM_START_TIME.DATE
input_timestamp_column_name = SYSTEM_START_TIME
interval = 60
max_rows = 10000
mode = tail
output_timestamp_format = yyyy-MM-dd HH:mm:ss
query = zzzzz
source = zzzzz
sourcetype = oracle:fcc:audit
tail_rising_column_checkpoint_value = 61393439
tail_rising_column_fullname = (001) NULL.SEQUENCE_NO.NUMBER
tail_rising_column_name = SEQUENCE_NO
ui_query_catalog = NULL
ui_query_mode = advanced
ui_query_schema = ZZZZZ
ui_query_table = ZZZZZ
auto_disable = false
disabled = 0

Can someone advise how to keep the DB input enabled regardless of the number of failures? Best regards, Altin
Hello, is it possible to generate notables only based on the number of matched events? For example, if the correlation search matches more than 20 events, it will not generate the notables; instead it will send an email notification (if possible). If it does not reach more than 20 events, it will generate the notables.
Hello, is it possible to create notables only based on the number of events triggered? Example: if the correlation search result reaches more than 20, I don't want it to trigger notables; instead, generate an email. Is this possible?
Hi, I've been trying for hours and nothing works, so I figure you might help me out. I have the following very long query:

| tstats SUM(requested_cpus) as requested_cpus, SUM(reserved_ram) as reserved_ram, SUM(requested_ram) as requested_ram, SUM(used_ram) as used_ram, SUM(compute_ram_total) as compute_ram_total, count as agg_field_seen WHERE (index=monitor (host="$queuename$") fs_group=$fsgroup$ project=$project$ site=$site$ (slave_resource{} IN ("*")) (NOT slave_resource{} IN ("___VALUE_NONE___")) (host_state="normal" OR host_state="full" OR host_state="ovrld" OR host_state="sick" OR host_state="susp" OR host_state="base" OR host_state="ready")) OR (index=ncstat_monitor (host="$queuename$") fs_group=$fsgroup$ project=$project$ site=$site$ (compute_slave_res{} IN ("*")) (NOT compute_slave_res{} IN ("___VALUE_NONE___")) (host_state="normal" OR host_state="full" OR host_state="ovrld" OR host_state="sick" OR host_state="susp" OR host_state="base" OR host_state="ready")) BY _time, site, fs_group span=15min
| eval query_enabled=1
| eventstats sum(agg_field_seen) AS sum_agg_field_seen BY fs_group
| sort 0 - sum_agg_field_seen
| streamstats dc(fs_group) AS rank
| eval agg_field_ranked=if(rank <= 50 - 1, 'fs_group', "Other")
| rename agg_field_ranked as fs_group
| stats SUM(reserved_ram) as reserved_ram, SUM(requested_ram) as requested_ram, SUM(used_ram) as used_ram, SUM(compute_ram_total) as compute_ram_total, SUM(requested_cpus) as requested_cpus BY _time, site, fs_group
| eval slots=max(reserved_ram/32,requested_cpus)
| eval full_fsgroup=site.":".fs_group
| timechart span=15min limit=50 partial=false MAX(slots) as Slots BY full_fsgroup

In addition, I have another query from a different index:

`p_flow("*",dv)` "***" reg_name=*$reg$* event_type=flow
| eval fairshare = coalesce(fairshare, fsgroup)
| table fairshare
| dedup fairshare

The issue is, I would like to limit the results to match only the output of the fairshare field from the second query. Please notice that fairshare in the first query is called fs_group. I've been trying many options from different past answers and nothing seems to be working. Please assist and be blessed forever.
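A hedged sketch of the usual pattern for this: run the second query as a subsearch, rename fairshare to fs_group so the field names line up, and let the subsearch's output become an implicit fs_group filter inside the tstats WHERE clause (the ... below stands for the unchanged parts of the original query):

```
| tstats ... WHERE ( (index=monitor ...) OR (index=ncstat_monitor ...) )
    [ search `p_flow("*",dv)` "***" reg_name=*$reg$* event_type=flow
      | eval fairshare=coalesce(fairshare, fsgroup)
      | dedup fairshare
      | rename fairshare as fs_group
      | table fs_group ]
    BY _time, site, fs_group span=15min
```

A subsearch expands to (fs_group="a" OR fs_group="b" OR ...), which WHERE accepts; subsearch output is capped (around 10,000 results by default), so this assumes the fairshare list stays below that.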
Hi, I don't have any experience with Kafka, but we need to send data from Kafka to Splunk. I am reading the documentation but don't understand what we need to do. OK, HEC on Splunk, and configure the HEC options. But what about the .jar (or how to build the Docker image)? Do we need to build the .jar and then put it into the Kafka plugin folder, or is there some way to build a connector that will connect to Kafka (like, for example, Redis or RabbitMQ, like middleware)? Thanks, any help is appreciated.
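For what it's worth, a hedged outline of how Splunk Connect for Kafka usually works (hostnames, topic name, and token below are illustrative): the jar lives on the Kafka Connect side, not in Splunk - you drop it into a Kafka Connect worker's plugin path, then register a sink connector over Kafka Connect's REST API, and the connector forwards topic data to HEC:

```
# 1. put the splunk-kafka-connect jar into the directory listed in the
#    Kafka Connect worker's plugin.path, then restart the worker

# 2. register the sink connector (Kafka Connect REST API, default port 8083)
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "splunk-sink",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "topics": "my_topic",
    "splunk.hec.uri": "https://splunk.example.com:8088",
    "splunk.hec.token": "00000000-0000-0000-0000-000000000000"
  }
}'
```

A prebuilt release jar can be downloaded, so building from source is optional; Splunk itself only needs HEC enabled with a valid token.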
Hi Splunk folk, I've spent most of the morning trying to find this with no luck; I've seen some similar posts but none of the solutions work for me. Why is the "Current Size" greater than the "Max Size" for several indexes which reside on a cluster? Here is an example of the indexes.conf file that my master is pushing out for a 100 GB max size index:

[index_name]
homePath = $SPLUNK_DB\index_name\db
coldPath = $SPLUNK_DB\index_name\colddb
thawedPath = $SPLUNK_DB\index_name\thaweddb
repFactor = auto
enableDataIntegrityControl = 0
enableTsidxReduction = 0
maxTotalDataSizeMB = 102400
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableOnlineBucketRepair = 1
minHotIdleSecsBeforeForceRoll = 0
suspendHotRollByDeleteQuery = 0
syncMeta = 1
disabled = 0

I haven't tried changing the maxTotalDataSizeMB value and pushing out new configs yet, because I wanted to understand why it's doing this in the first place. Any ideas?
Hi, my query:

index=ph_windows_sec sourcetype=XmlWinEventLog (EventCode=630 OR EventCode=4726 OR EventCode=624 OR EventCode=4720) earliest=-14d
| stats values(TargetUserName) as TargetUserName, values(signature) as Message, count by Caller_User_Name
| eval status=case(EventCode=630, "Account%20Deletion", EventCode=4726, "Account%20Deletion", EventCode=624, "Account%20Creation", EventCode=4720, "Account%20Creation")
| transaction user startswith=status="Account%20Creation" endswith=status="Account%20Deletion" maxevents=2
| where duration < 3600

When I add the stats values, the query doesn't find any hits. When I delete the stats values, the query returns hits. What is wrong with my query? Thanks!
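One likely culprit, offered as a hedged guess: stats keeps only the fields it outputs, so after the stats line there is no EventCode left for the eval/case to read and no user field for transaction to group by. A sketch that derives status before any aggregation:

```
index=ph_windows_sec sourcetype=XmlWinEventLog (EventCode=630 OR EventCode=4726 OR EventCode=624 OR EventCode=4720) earliest=-14d
| eval status=case(EventCode=630 OR EventCode=4726, "Account Deletion",
                   EventCode=624 OR EventCode=4720, "Account Creation")
| transaction user startswith=eval(status="Account Creation") endswith=eval(status="Account Deletion") maxevents=2
| where duration < 3600
```

Any stats values(...) summary would then go after the transaction, while the fields it needs still exist on the events.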
Dear all, I'm trying to retrieve some log metadata and associate it with all my events. Example: when my application starts, I'll get a few lines with what I'm calling metadata here (version, env, user, ...), and then the raw logs start.

2021-05-10T09:53:21.122+02:00|Criticity=INFO|Message=Version:3.4;Env=production
2021-05-10T09:53:46.474+02:00|Criticity=INFO|Message=blabla
2021-05-10T09:53:46.474+02:00|Criticity=DEBUG|Message=blabla2
2021-05-10T09:53:46.478+02:00|Criticity=DEBUG|Message=blabla3

I want this Version and Env to be usable as fields in all my events, as if each event looked something like this from a sub-query search standpoint:

2021-05-10T09:53:46.474+02:00|Criticity=INFO|Message=blabla|Version:3.4;Env=production
2021-05-10T09:53:46.474+02:00|Criticity=DEBUG|Message=blabla2|Version:3.4;Env=production
2021-05-10T09:53:46.478+02:00|Criticity=DEBUG|Message=blabla3|Version:3.4;Env=production

What would be the solution to end up with such usage? Context: the application I want to monitor is a heavy client, the users can choose the environment to connect to from their desktop, and I capture the logs via a Universal Forwarder to Splunk Cloud. I don't have much control over the log format; I have to go with this one. Thanks in advance for your help.
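A hedged sketch of one search-time approach: extract Version/Env from the startup event, then use filldown to carry the last non-null values forward onto the subsequent events. This assumes events from one client session sort together under the same host and source, and that the startup line falls inside the search time range; the index and sourcetype names are placeholders:

```
index=myapp sourcetype=myapp:log
| rex field=_raw "Message=Version:(?<Version>[^;]+);Env=(?<Env>\S+)"
| sort 0 host source _time
| filldown Version Env
```

If several clients interleave in one result set, streamstats latest(Version) as Version latest(Env) as Env by host source is a per-group alternative, since filldown has no by clause.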
Hi, I'm receiving FortiGate events via FortiAnalyzer, and I need to set the host to the name of the device that created the event, which is contained in the event message as devname.

May 10 10:44:30 10.90.223.5 date=2021-05-10 time=11:44:30 devname="test" devid="test" logid="0000000013" type="traffic" subtype="forward" level="notice" vd="root" eventtime=1620643470882685981 tz="+0100" srcip= srcport=62408 srcintf="" srcintfrole="undefined" dstip= dstport=53 dstintf="port2" dstintfrole="wan" sessionid=81948384 proto=17 action="accept" policyid=23 policytype="policy" poluuid="cbd3e37e-5bf1-51eb-f2ad-0a49a47d1d1d" service="Domain Services UDP" dstcountry="Reserved" srccountry="Reserved" trandisp="noop" duration=180 sentbyte=76 rcvdbyte=194 sentpkt=1 rcvdpkt=1 vpn="" vpntype="ipsec-dynamic" appcat="unscanned"

I have started to build the transform below, but it doesn't work:

[Set-Host-By-Devname]
REGEX = ([^.+?devname=\"[A-Z0-9]+")
FORMAT = host::$1
DEST_KEY = MetaData:Host
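A hedged correction sketch: the posted regex has an unbalanced ( and [, and the capture group should wrap only the device name inside the quotes. Something along these lines in transforms.conf, wired up from props.conf (the sourcetype name below is an assumption):

```
# transforms.conf
[Set-Host-By-Devname]
REGEX = devname="([^"]+)"
FORMAT = host::$1
DEST_KEY = MetaData:Host

# props.conf
[fortigate_traffic]
TRANSFORMS-set_host = Set-Host-By-Devname
```

Being an index-time transform, this only affects events indexed after the config reaches the parsing tier (indexers or heavy forwarders) and the instances are restarted.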
Hi, I am trying to ping servers from the app present on the Heavy Forwarder, but we have more than 5000 servers, so it is not feasible to add the inputs manually. If any new servers are added to the environment, they should be automatically added as ping inputs. But in our environment we are not saving any data on Heavy Forwarders. So can anyone suggest a way to automatically add the inputs to the app? @LukeMurphey any help would be appreciated. Thanks
Hello, I need help coloring the bars of a chart for one field based on another field's value. Below is my sample search:

index=nextgen sourcetype=lighthouse_json datasource=LH step="http://www.google.com/"
| stats values(speedindex) as speedindex by _time

Sample log for the above search and visualization for the above query... I want to color the bars of the speed index based on the score. For example, in the above screenshot speedindex_score is 89, which means the page is performing well, and I want that bar in green. If:
speedindex_score is between 0-49 - red
speedindex_score is between 50-89 - yellow
speedindex_score is between 90-100 - green
Can someone please help me with how to plot in the above-mentioned manner?
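Column charts can't color a single series per-bar out of the box, so a common workaround (sketched here with hedges - it assumes speedindex_score can be aggregated alongside speedindex) is to split the value into three range-named series and pin a color on each:

```
index=nextgen sourcetype=lighthouse_json datasource=LH step="http://www.google.com/"
| stats values(speedindex) as speedindex values(speedindex_score) as score by _time
| eval red=if(score<=49, speedindex, null()),
       yellow=if(score>49 AND score<=89, speedindex, null()),
       green=if(score>89, speedindex, null())
| fields _time red yellow green
```

In the panel's Simple XML, setting charting.fieldColors to {"red":0xdc4e41,"yellow":0xf8be34,"green":0x53a051} with stacked column mode would then render each bar in the color matching its score range.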