Hi all, I'm trying to group events using transaction. Since there are multiple endswith conditions, I tried the following to match any one of the 3 string patterns, but it doesn't match:

... | transaction client endswith=eval(match(_raw, "string1|string2|string3"))

Would anyone please help? Thanks a lot. Best regards
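As a sketch of one alternative worth trying (an assumption about the fix, not a confirmed diagnosis): the transaction command's endswith filter also accepts a quoted search expression, so OR-ing the three literals may work where the eval form does not:

```
... | transaction client endswith="(string1 OR string2 OR string3)"
```

If the eval form is preferred, note that it must evaluate to a boolean per event, which match() does, so the issue may instead lie in the patterns themselves not appearing in _raw as written.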
Hello,

Syntax:

index=security sourcetype=EDR:*
| eval dest=coalesce(ip,ipaddress)
| stats values(sourcetype) values(cvs) values(warning) values(operating_system) values(ID) by dest

Problem: sourcetype contains two sourcetypes, EDR:Security and EDR:Assets. In Security I have the fields ip, cvs, and warning; in Assets I have the fields ipaddress, operating_system, and ID. I use the syntax above and I am happy, as I see results from both sourcetypes. Now I need to see only results that have cvs above 7. The problem is that whenever I use cvs>7, | search cvs>7, or | where cvs>7, I only see results from EDR:Security (the sourcetype that actually carries the cvs field I am filtering on). How can I still see results from both sourcetypes, but only for hosts which have a cvs score above 7?
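A sketch of one common approach (assuming cvs only exists on the EDR:Security events for a given dest): compute the per-dest maximum with eventstats before filtering, so the filter keeps every event for a qualifying dest rather than only the events that carry cvs:

```
index=security sourcetype=EDR:*
| eval dest=coalesce(ip,ipaddress)
| eventstats max(cvs) as max_cvs by dest
| where max_cvs > 7
| stats values(sourcetype) values(cvs) values(warning) values(operating_system) values(ID) by dest
```

Because eventstats writes max_cvs onto all events sharing the same dest, the Assets events for that host survive the where clause too.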
Is there any possibility to split the values from the message field, such as teamName and ID, into separate fields?
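A minimal sketch, assuming message contains key=value pairs separated by commas (the actual format of message is not shown, so the pattern below is only illustrative and would need adjusting):

```
... | rex field=message "teamName=(?<teamName>[^,]+),\s*ID=(?<ID>\S+)"
```

If message is actually JSON, | spath input=message may extract the keys with no regex at all.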
Hi Splunkers,

I have 4 indexers in my clustered environment, with 8 TB for hot volumes, 4 TB for summaries, and 160 TB for cold buckets in total. I don't have a freeze path. This is my indexes.conf as well:

[volume:HOT]
path = /Splunk-Storage/hot
maxVolumeDataSizeMB = 1900000 # ~2 TB

[volume:COLD]
path = /Splunk-Storage/cold
maxVolumeDataSizeMB = 40000000 # ~40 TB

[volume:_splunk_summaries]
path = /Splunk-Storage/splunk_summaries
maxVolumeDataSizeMB = 950000 # ~1 TB

Now I want to add 4 indexers, but I can't increase my volumes due to some limitations, so I have to split my total space across 8 indexers instead of 4. What is your suggestion to avoid possible data loss and minimize downtime?
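As a rough sketch of the arithmetic only (assuming the fixed total capacity is divided evenly across 8 peers; the safe numbers for your cluster also depend on replication/search factors and how data is rebalanced after the new peers join):

```
[volume:HOT]
path = /Splunk-Storage/hot
maxVolumeDataSizeMB = 950000 # ~1 TB per indexer (8 TB total / 8 peers)

[volume:COLD]
path = /Splunk-Storage/cold
maxVolumeDataSizeMB = 20000000 # ~20 TB per indexer (160 TB total / 8 peers)

[volume:_splunk_summaries]
path = /Splunk-Storage/splunk_summaries
maxVolumeDataSizeMB = 475000 # ~0.5 TB per indexer (4 TB total / 8 peers)
```

Lowering the caps before the existing peers have shed data risks premature bucket rolling, so shrinking the caps only after a data rebalance completes is the cautious ordering.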
Hello, I have the below Splunk query, which gives me the response time value extracted from the response.

index=my_index openshift_cluster="cluster009" sourcetype=openshift_logs openshift_namespace=my_ns openshift_container_name=contaner
| search "POST /payment/orders/v1 HTTP"

Sample response message:

"message": { "input": "192.168.56.10 - - [03/Apr/2023:08:26:18 +0000] \"GET /payment/orders/v1/1b8ee28e-a42b-4ef0-9063-6f36302aeac2-ntt HTTP/1.1\" 200 9907 8080 13 ms" }

If I add the pre-extracted fields processDuration and serviceURL to the above query, I get the average/response90 values that I want:

| stats avg(processDuration) as average perc90(processDuration) as response90 by serviceURL
| eval average=round(average,2),response90=round(response90,2)

Now, I have 4 different search texts:

CreateOrder: search "POST /payment/orders/v1 HTTP"
getOrder: search "GET /payment/orders/*-* HTTP"
processOrder: search "POST /payment/orders/*/process HTTP"
validate: search "POST /payment/orders/*/validate HTTP"

I want to build a query using these 4 searches and get the response time details as below:

Operations     average   response90
CreateOrder    250       380
getOrder       240       330
processOrder   210       321
validate       260       365
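A sketch of one way to combine the four searches into a single query (assuming the URL patterns can be matched against _raw; the regexes below are illustrative translations of the wildcard search strings and may need tightening against your real URLs):

```
index=my_index openshift_cluster="cluster009" sourcetype=openshift_logs openshift_namespace=my_ns openshift_container_name=contaner
| eval Operations=case(
    match(_raw, "POST /payment/orders/v1 HTTP"), "CreateOrder",
    match(_raw, "GET /payment/orders/\S+-\S+ HTTP"), "getOrder",
    match(_raw, "POST /payment/orders/\S+/process HTTP"), "processOrder",
    match(_raw, "POST /payment/orders/\S+/validate HTTP"), "validate")
| where isnotnull(Operations)
| stats avg(processDuration) as average perc90(processDuration) as response90 by Operations
| eval average=round(average,2), response90=round(response90,2)
```

case() assigns the first matching label, so listing the most specific patterns first avoids misclassification when one URL shape is a prefix of another.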
I have an event like this: I want to list a table following CLIENT_LIST. For example:

ip_vpn         name_vpn      time_vpn
10.10.0.20     louis_tran    Tue Apr 4 9:21:41 2023
10.0.0.21      wanki_trinh   Tue Apr 4 9:15:02 2023

Does anyone have any idea?
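A rough sketch, assuming the event is OpenVPN status output in which each CLIENT_LIST row is comma-separated and carries a name, a client address, and a connected-since timestamp (the column order varies across OpenVPN status versions, so the capture positions below are assumptions to verify against the actual event):

```
...
| rex max_match=0 "CLIENT_LIST,(?<name_vpn>[^,]+),(?<ip_vpn>[0-9.]+)"
| rex max_match=0 "(?<time_vpn>\w{3} \w{3}\s+\d+ \d+:\d+:\d+ \d{4})"
| table ip_vpn name_vpn time_vpn
```

If all CLIENT_LIST rows land in one event, the extracted fields are multivalue and would need mvzip/mvexpand to line up into one table row per client; if each row is its own event, the rex alone suffices.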
Hello everyone. I have a problem installing "Python for Scientific Computing". Every time I try to install the app I get this error:

Error during app install: failed to extract app from C:\Program Files\Splunk\var\run\4f044214eacc4962.tar.gz to C:\Program Files\Splunk\var\run\splunk\bundle_tmp\6f09d351dce26128: The system cannot find the path specified.

I have tried creating the folder manually with no success, and also tried installing it through the terminal, but the same problem occurs. Does anyone have an idea how to solve this problem?
I have abruptly become unable to access Splunk ES, with the error message "Fetch failed: authentication/current-context". We have Splunk on-prem and there have been no recent changes in Splunk ES. All other apps are working fine, and Splunk ES is working fine for other team members. What could be the issue, and is there any resolution for it? Thanks.
I added this app to Splunk: https://splunkbase.splunk.com/app/4592. The Splunk Enterprise version is 8.2.9. Can you help fix this error?

"Unable to initialize modular input \"bitbucket_repositories\" defined in the app \"TA-bitbucket\": Introspecting scheme=bitbucket_repositories: script running failed (PID 24175 exited with code 1)..",
By default, only labels are displayed on a pie chart when using the top command. Is there any way to add the count and percentage to the pie chart?
I have the below configurations in my transforms and props config files to index only events containing the keyword 'splunking' from the log files, but it does not seem to be working.

transforms.conf

[keepOnly10Lines]
REGEX=splunking
FORMAT=indexQueue
DEST_KEY=queue

props.conf

[test-GP]
TRANSFORMS-set = keepOnly10Lines

inputs.conf

[monitor:///opt/splunk/data/osheanTest/darsha_test*.log]
index = main
sourcetype = test-GP
disabled = 0
whitelist = .log$
move_policy = sinkhole
crcSalt = <source>

Below are the logs:

05-12-2019 22:07:53.705 +0100 INFO splunkkkkkkkkkk - iueyrh8923f 2f82hob3f 208fhob 23f802ofb 2f8uo2bj f28ufb 2f892uobf2803fbuo j2f028bof j20fi oj
05-12-2019 22:07:53.705 +0100 INFO splunkingkkkkkkk - be27tf829fb 2u79fg2uibf 20fb 2f972gbu f20fb f0h2if 20f8bo f2hinfp 2fip 2f802fio2nf l
05-12-2019 22:07:53.705 +0100 INFO splunkkkkkkkkkk - uewhwf8iew cewuwbkj cobvjl ced08 jlwcuwojl vcew0vbjl wevcowejbl vwpeubvjl wvujwlevhwpivnwepviblj m
05-12-2019 22:07:53.705 +0100 INFO splunkkkkkkkkkk - 73ye9ubf 2fy92ou3bfj 2fhuo2bj f2yfdou2bj f208fhoub2jf02obfjl20fhinkf2pihbfl f9ip2knf c-92pjfpi2k 2-hpifn;k
05-12-2019 22:07:53.705 +0100 INFO splunkingkkkkkkk - ye08ru280fihn2 f20hfoib 2f0h2bi f2-9fpi2n f2fhpi2nk f2-9phifnk; 2fh2pibk f2fhpin;k
05-12-2019 22:07:53.705 +0100 INFO splunkkkkkkkkkk - ifhone 2n0ifnlk2 mfn082oihldj ovuce2h083do2bj fc028ifh3f8oih2lfdn2fob2jf80hi2pblj m9-2ufjpn;k f082hif 2
05-12-2019 22:07:53.705 +0100 INFO splunkingkkkkkkkkk - 8yd802hoifn 2fu2bj f28foub 2f9i2uk f2fobj 2fb 292fpin2 f29jpfin;k 2fpi2nf 0iphnfl 2fiplk 2fhipbl
05-12-2019 22:07:53.705 +0100 INFO splunkingkkkkkkk - d80dfh2inf280fyhoin2lf082hfoibnl 3df032u2inf2083yfh2n3f082y3fhn2 n2803f2ifn 2f820bf 280f2ob f280foi 2jl82u0ib
05-12-2019 22:07:53.705 +0100 INFO splunkkkkkkkkkk - e3ue832oin 23ifh23oilkf 2380ifb 23f802obuf 29-fhpi2 f290fpi 2f-2ipk
05-12-2019 22:07:53.705 +0100 INFO splunkingkkkkkkk - 3hd982yo802in f230f92hin3 f23fhpib2 3f230hpifn23fpi2b l
05-12-2019 22:07:53.705 +0100 INFO splunkingkkkkkkkkk - wyud8230foidn 2f02hiofn2fhpi2bf2hipfb2fpi2b3 f23f2-93fpi2n;k3 f2-fhpi2n3k; f2-39hpifnk; m
05-12-2019 22:07:53.705 +0100 INFO splunkkkkkkkkkk - feature="IOWait" color=green due_to_stanza="feature:iowait" node_type=feature node_path=splunkkkkkkkkkkd.resource_usage.iowait
05-12-2019 22:07:53.705 +0100 INFO splunkingkkkkkkkkk - vgavsgcavs chcyvashc msacyhasvc asasycvas casycvajs casyicxh darshan
05-12-2019 22:07:53.705 +0100 INFO splunkkkkkkkkkk - 10520523 3412 0520523 120523 120534gtey54y darshan
05-12-2019 21:37:53.702 +0100 INFO splunkkkkkkkkkk - 2052052ftrfquxutfxyiyqigx yhghck scxixb qcyicgkhqwmn cqwicykh darshan.

Please help me figure out what is preventing Splunk from applying the transforms and props configurations.
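For comparison, the documented pattern for keeping only matching events is to first route everything to nullQueue and then route the keepers back to indexQueue; a transform that only routes matches to indexQueue has no effect on its own, because non-matching events already go to the index queue by default. A sketch against the stanzas above (the stanza name keepSplunking is illustrative; ordering in TRANSFORMS- matters, since later transforms override earlier ones):

```
# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keepSplunking]
REGEX = splunking
DEST_KEY = queue
FORMAT = indexQueue

# props.conf
[test-GP]
TRANSFORMS-set = setnull, keepSplunking
```

These index-time settings also have to live on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.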
Hi all, I have a data set like this:

_time, duration, category
XXX, 0.145, A
XXY, 0.177, B
XXZ, 0.178, A

(XXX, XXY, and XXZ are _time values.) I plot a graph like timechart avg(duration) by category and it shows two lines perfectly, but I want to plot a graph over time of the difference between the two averages (the two categories). How do I do that?
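A sketch of one way to do this (assuming the category values are literally A and B, so the timechart produces columns named A and B that eval can reference with single quotes):

```
... | timechart avg(duration) by category
| eval diff='A'-'B'
| fields _time diff
```

Dropping the A and B columns with fields leaves a single series, so the chart plots just the difference over time.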
Hi Splunkers, I have a conditional drilldown. When I only had one condition, <condition match="$t_DrillDown$ = &quot;*&quot;">, it worked. But when I added another condition, len($Gucid_token$) = 20, and click some field (which passes $Gucid_token$ as input), the two dashboards included here are not opened; instead a basic Splunk search is opened.

<condition match="$t_DrillDown$ = &quot;*&quot; AND len($Gucid_token$) = 20">
  <link target="_blank">
    <![CDATA[
      /app/optum_gvp/pivr_search?form.i_callGUID=$click.value2$&form.i_time1.earliest=$row.StartDTM_epoch$&form.i_time1.latest=$row.EndDTM_epoch$
    ]]>
  </link>
  <link target="_blank">
    <![CDATA[
      /app/optum_gvp/guciduuidsid_search_applied_rules_with_ors_log_kvp?form.Gucid_token_with2handlers=$click.value2$&form.field2.earliest=$row.StartDTM_epoch$&form.field2.latest=$row.EndDTM_epoch$
    ]]>
  </link>
</condition>

Why does adding the condition len($Gucid_token$) = 20 cause the drilldown to open a basic Splunk query instead of the two dashboards?

Thanks, Kevin
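One thing worth checking, offered as an assumption about the failure rather than a confirmed diagnosis: the match attribute is evaluated as an eval-like expression after token substitution, so an unquoted $Gucid_token$ substitutes to a bare string that len() cannot parse, making the whole condition invalid and falling back to the default drilldown. Quoting the token (XML-escaped inside the attribute) may help:

```
<condition match="$t_DrillDown$ = &quot;*&quot; AND len(&quot;$Gucid_token$&quot;) = 20">
```

If the token value can itself contain quote characters, this still breaks, and moving the length check into the search that sets the token is a more robust pattern.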
Hi, this is a follow-up to my previous question. Now I am trying to add a second drop-down. The values populated are correct, but my events table is not updating. Are there errors in my code?

<panel>
  <title>Error Log</title>
  <input type="dropdown" token="ProfileLog" searchWhenChanged="true">
    <label>Module</label>
    <fieldForLabel>ESPACE_NAME</fieldForLabel>
    <fieldForValue>ESPACE_NAME</fieldForValue>
    <search base="baseSearch">
      <query>| stats count by ESPACE_NAME</query>
    </search>
    <choice value="*">All</choice>
    <default>*</default>
    <initialValue>*</initialValue>
  </input>
  <input type="dropdown" token="MessageLog" searchWhenChanged="true">
    <label>Error Message</label>
    <search base="baseSearch">
      <query>| search ESPACE_NAME="$ProfileLog$" | stats count by MESSAGE</query>
    </search>
    <default>*</default>
    <fieldForLabel>MESSAGE</fieldForLabel>
    <fieldForValue>MESSAGE</fieldForValue>
    <choice value="*">All</choice>
    <initialValue>*</initialValue>
  </input>
  <event>
    <search base="baseSearch">
      <query>| search ESPACE_NAME="$ProfileLog$"</query>
    </search>
    <option name="list.drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </event>
</panel>

Thanks!
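One observation, as a sketch (assuming the intent is for the events table to react to both drop-downs): the <event> search only references $ProfileLog$, so changing $MessageLog$ never triggers it to re-run. Adding the second token to the query should make it update:

```
<event>
  <search base="baseSearch">
    <query>| search ESPACE_NAME="$ProfileLog$" MESSAGE="$MessageLog$"</query>
  </search>
  <option name="list.drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</event>
```

With the "All" choice mapped to *, the extra clause degrades gracefully to MESSAGE=* when no specific message is selected.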
I have a Splunk Cloud implementation where, on the client side, there is a Heavy Forwarder server that collects and forwards logs to Splunk Cloud. On that Heavy Forwarder there is also the DB Connect add-on to get data from a database. My question: if for some reason the hostname of the database changes, and I configure the new database's hostname and port, would Splunk, treating it as a new database, ingest it completely? The input was configured in "Rising" mode so that it only picks up new records, but since to the add-on it would be a new database, would it download the complete database? If it is a database with logs more than 5 years old, is there any method to bring them into Splunk, since doing so will obviously exceed the daily license?
I want to extract fields from events similar to the following event, through props.conf, using a regular expression. The challenge is that the event is XML formatted but has JSON data embedded in it. I am trying to find a solution similar to the one stated in this post: https://community.splunk.com/t5/Getting-Data-In/Sed-command-Large-XML-values-in-JSON-events-makes-replacement/m-p/370664

This is how my events look (example event):

<25>1 2023-04-03T13:12:32.0Z AH-1249259-001 EPOEvents - EventFwd [agentInfo@3401 tenantId="1" bpsId="1" tenantGUID="{00000000-0000-0000-0000-000000000000}" tenantNodePath="1\2"] <?xml version="1.0" encoding="utf-8"?>
<EPOEvent><MachineInfo><AgentGUID>{8396cab6-ec77-11ea-2747-3448edc44e42}</AgentGUID><MachineName>KB89A2AEBECBD</MachineName>
<RawMACAddress>12345</RawMACAddress>
<IPAddress>12345</IPAddress>
<AgentVersion>5.7.5.504</AgentVersion>
<OSName>Windows 10</OSName>
<TimeZoneBias>300</TimeZoneBias>
<UserName>chill</UserName>
</MachineInfo>
<SoftwareInfo ProductName="BeyondTrust Privilege Management" ProductVersion="23.1.0.259" ProductFamily="Secure">
<Event>
<EventID>202256</EventID>
<Severity>0</Severity>
<GMTTime>2023-04-03T13:10:36</GMTTime>
<LocalTime>2023-04-03T08:10:36</LocalTime>
<CustomFields target="AvectoReportingEvents">
<Data>{&quot;Header&quot; : {&quot;AgentVersion&quot; : &quot;23.1.259.0&quot;, &quot;Code&quot; : &quot;106&quot;, &quot;EndpointType&quot; : &quot;MicrosoftWindows&quot;, &quot;HostDomainName&quot;: &quot;my.com&quot;, &quot;RuleScriptStatus&quot;: &quot;&quot;, &quot;AuthMethods&quot;: [], &quot;IdPAuthenticationUserName&quot;: &quot;&quot;, &quot;ConfigurationID&quot;: &quot;be94d460-c4cb-4827-8f3b-5572727c54e6&quot;, &quot;UACTriggered&quot;: 0 }}</Data>
<EventId>106</EventId>
<SentTime>2023-04-03T13:10:36Z</SentTime>
<Version>23.1.0.259</Version></CustomFields></Event></SoftwareInfo></EPOEvent>
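A sketch of one possible approach (assumptions: the goal is to parse the JSON inside <Data>, the sourcetype name below is a placeholder, and rewriting _raw at index time is acceptable in your environment). First, unescape the XML-encoded quotes at index time on the parsing tier:

```
# props.conf -- [epo:events] is a hypothetical sourcetype name
[epo:events]
SEDCMD-unescape_quotes = s/&quot;/"/g
```

Then, at search time, lift the JSON out of the <Data> element and hand it to spath:

```
| rex field=_raw "<Data>\s*(?<json_data>\{.+?\})\s*</Data>"
| spath input=json_data
```

If modifying _raw is not acceptable, the same replace can instead be done at search time with | eval json_data=replace(json_data, "&quot;", "\"") before the spath.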
Hello,

In a Log4j scan, the following directory was flagged for containing compromised log4j .jar files. The files are in the directory below, although the host referenced is one of the deprecated servers in our infrastructure; I have migrated all of our Splunk Enterprise components to newer architecture, yet these directories still exist. I am curious whether there is a correct way to remove the old hosts under searchpeers, including all the <old_host>.* files. If unused, can I just delete them?

/opt/splunk/var/run/searchpeers/<old_host>-1641917646 (no longer exists)
/opt/splunk/var/run/searchpeers/<current_host>-1641917646 (active host in use)

File that is being caught by the scan:

/opt/splunk/var/run/searchpeers/<old_host>-1641917646/apps/splunk_archiver/java-bin/jars/vendors/spark/3.0.1/lib/log4j-core-2.13.3.jar

If the directories can't be safely removed, I believe other documentation says that just the .jar can be removed safely.
I have a use case in which a user sends us logs in batches, where each individual log has its own timestamp (when it actually occurred). We then log them individually to Splunk using Serilog.Sinks.Splunk in .NET 6. What we are trying to do is replace the automatically generated logging time in Splunk with the original timestamp that we received from the user. Is this possible in Splunk, and if so, how?
Hello Splunkers,

I am not seeing data for a particular index after a restart.

3/29/23 5:00:34.647 PM
03-30-2023 00:00:34.647 +0000 INFO HotDBManager [7073 indexerPipe] - closing hot mgr for idx=abc
component = HotDBManager host = abc index = _internal source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

3/29/23 5:00:34.618 PM
03-30-2023 00:00:34.618 +0000 INFO IndexWriter [7073 indexerPipe] - idx=abc Handling shutdown or signal, reason=1
component = IndexWriter host = abc index = _internal source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

3/29/23 5:00:34.601 PM
03-30-2023 00:00:34.601 +0000 INFO IndexWriter [7073 indexerPipe] - idx=abc Sync before shutdown

I restarted Splunk again and then enabled and disabled the index, but I am still not seeing data. I checked the source; it is showing data.
I need a search that returns the episodeid for all episodes for a given emid and timeframe. This is available from the 'Share Episode' dropdown for episodes displayed on the Episode Review page; I need a background search that would return this info.
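A sketch of one possible background search (assumptions: this is ITSI, the grouped notable events live in the itsi_grouped_alerts index, the episode id is carried in the itsi_group_id field, and emid is present as a field on those events; verify the field names against your own events before relying on this, and set the timeframe via the time picker or earliest/latest):

```
index=itsi_grouped_alerts emid="<your_emid>"
| stats values(itsi_group_id) as episodeid
```

The <your_emid> placeholder stands in for the actual emid value to look up.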