Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, what could be the issue behind the absence of Windows Security command-line events matching EventCode=4688 ParentProcessName="C:\\Windows\\System32\\cmd.exe"? I have not blacklisted it in any of the apps. Thanks.
Hi, how can we list all the blacklist stanzas from every app's inputs.conf on the deployment server (DS)? I'm asking because command-line events are getting blocked in my environment. Thanks
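Not a definitive answer, but a common way to check is with btool or a plain grep over the deployment apps; the paths below assume the default $SPLUNK_HOME layout:

  # Show every inputs.conf setting with the file it comes from, then keep only blacklist lines
  $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i blacklist

  # Or search the apps staged on the deployment server directly
  grep -ri "^blacklist" $SPLUNK_HOME/etc/deployment-apps/*/local/inputs.conf \
                        $SPLUNK_HOME/etc/deployment-apps/*/default/inputs.conf

Keep in mind that btool on the deployment server reports that server's own merged configuration; the blacklists that actually apply to the Windows event log inputs are merged on each forwarder, so running the same btool check on a sample forwarder (or grepping deployment-apps as above) is usually more telling.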
In splunkdev2 we are trying to implement SAML authentication. We have configured it with a .pem certificate, but after applying the configuration the portal is not working. As per the team's guidance, we need to update the splunkdev server's key store so that it trusts the public signing authority's digital certificate. How can we do that?
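Not a definitive fix, but if the issue is that Splunk does not trust the IdP's signing certificate chain, one common approach is to concatenate the IdP signing certificate with its intermediate and root CA certificates into a single PEM and point authentication.conf at it; the file names below are placeholders:

  # Build a single PEM containing the full chain (leaf first, then intermediates, then root)
  cat idp_signing_cert.pem intermediate_ca.pem root_ca.pem > /opt/splunk/etc/auth/idpCerts/idp_chain.pem

  # authentication.conf ([saml] stanza on the search head), relevant setting only
  [saml]
  idpCertPath = /opt/splunk/etc/auth/idpCerts/idp_chain.pem

A restart of Splunk (or a reload of the authentication settings) is needed after the change.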
Dear all, kindly help us get the Palo Alto app working. The logs are arriving in the environment, but the app is not working, as you can see in the pictures. Thank you in advance.
Hi, I was wondering whether it is possible to run scheduled browser tests on the Splunk Observability platform. I need tests to run from 2 pm to 5 pm, but I can't seem to find a way to have them run only during those times.
Hello all, I have an SPL search that is scheduled to run every minute over a span of 1 hour. On each execution the search runs for about 4 seconds and returns around 400 KB. How do the scheduler and search head handle this scenario on the backend? Does the scheduled search keep the scheduler and search head busy for the entire hour, or are they free to run other searches during that hour? Also, are there any negative implications for the Splunk infrastructure from such a scheduled search? Any information would be very helpful. Thank you, Taruchit
Hi all, does the current version of Splunk Cloud allow integration with Google Authenticator for multi-factor authentication?
Hi all, here is a sample WARN message:

2023-10-25 10:56:46,709 WARN pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - BOM Field Name: BOM_PPMDS_1, value is out of

The WARN message above has since been replaced by an ERROR message; please find the ERROR message below:

2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - Unknown error: {errorType=GENERAL,

How do I write the props.conf and transforms.conf configuration files for these events? Please help.

Regards,
Vijay K.
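The exact goal isn't fully stated, but if the intent is to extract the log level, thread, class, and message from these lines as search-time fields, a minimal sketch could look like this (the sourcetype name and the field names are assumptions):

  # props.conf
  [veeva:brp]
  REPORT-brp_fields = brp_field_extraction

  # transforms.conf
  [brp_field_extraction]
  REGEX = ^\S+\s+\S+\s+(\w+)\s+(\S+)\s+(\S+)\s+-\s+(.*)
  FORMAT = log_level::$1 thread::$2 class::$3 message::$4

With that in place you can search, for example, sourcetype=veeva:brp log_level=ERROR and alert on it; if the real goal is routing or rewriting events rather than extracting fields, the transforms would look different.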
I have the following data:

02:00:00 Item=A Result=success
02:00:05 Item=B Result=success
02:05:00 Item=A Result=fail
02:05:05 Item=B Result=success
02:10:00 Item=A Result=fail
02:10:05 Item=B Result=success
02:15:00 Item=A Result=success
02:15:05 Item=B Result=fail
02:20:00 Item=A Result=success
02:20:05 Item=B Result=fail
02:25:00 Item=A Result=success
02:25:05 Item=B Result=success
02:30:00 Item=A Result=success
02:30:05 Item=B Result=success
02:35:00 Item=A Result=success
02:35:05 Item=B Result=success
02:40:00 Item=A Result=success
02:40:05 Item=B Result=fail
02:45:00 Item=A Result=success
02:45:05 Item=B Result=success
02:50:00 Item=A Result=success
02:50:05 Item=B Result=success
02:55:00 Item=A Result=success
02:55:05 Item=B Result=success

My desired results:

Item  StartTime  EndTime   Duration
A     02:05:00   02:15:00  00:10:00
B     02:15:05   02:25:05  00:10:00
B     02:40:05   02:45:05  00:05:00

I have tried transaction and streamstats but got wrong results. Can anybody help me solve this problem? Thank you.
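Untested against this exact data, but a streamstats-based sketch along these lines should produce those rows; the base search is a placeholder:

  index=your_index sourcetype=your_sourcetype
  | sort 0 Item _time
  | streamstats current=f last(Result) as prev_result by Item
  | eval outage_begin=if(Result="fail" AND (isnull(prev_result) OR prev_result="success"), _time, null())
  | streamstats last(outage_begin) as StartTime by Item
  | where Result="success" AND prev_result="fail"
  | eval EndTime=_time, Duration=tostring(EndTime-StartTime, "duration")
  | eval StartTime=strftime(StartTime, "%H:%M:%S"), EndTime=strftime(EndTime, "%H:%M:%S")
  | table Item StartTime EndTime Duration

The idea: mark the first fail after a success as the start of an outage, carry that start time forward with streamstats, and emit one row at the first success that follows a fail. Note this only closes an outage when a recovery event exists; an Item that is still failing at the end of the search window will not produce a row.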
Hi! This is a very basic question; it's my first time working with the Splunk Enterprise platform. How do you actually go about switching on the feature to log network traffic coming into an internal network with a specific IP range? I essentially want Splunk Enterprise to act as a logger for all traffic that enters the internal network on a certain port, for example. How do I go about it? FYI - I do not want to use a forwarder or the upload-log-files function.
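Splunk Enterprise does not capture packets off the wire by itself; what it can do out of the box is listen on a TCP or UDP port and index whatever your network devices send to it (for example syslog from a firewall or router). A minimal sketch of a network input in inputs.conf, with the port and index as assumptions:

  [udp://514]
  index = network
  sourcetype = syslog
  connection_host = ip

The same input can be created in the UI under Settings > Data inputs > UDP (or TCP). If you need the traffic itself (packets and flows) rather than logs sent to a port, that is normally done with the Splunk Stream app or by exporting flow data from the network gear to a collector that forwards into Splunk.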
Hello, I am a beginner with Splunk. I am experimenting with a CSV dataset containing the daily average temperature for different cities across the world. As a first step, I would like to see, for a given city, a graph of the average temperature over time. However, by default the X axis on the timechart shows the timestamp of the source file rather than the time field contained in each event. As a result, all events show the same date, which is probably the date the dataset was created. How do I use the "Date" field contained in each event instead of the timestamp of the dataset file? Thanks
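The cleaner fix is to have Splunk use the Date column as the event timestamp at index time, but purely at search time you can also override _time before the timechart. A sketch, assuming the field is literally called Date in YYYY-MM-DD format and the temperature column is AverageTemperature (names and format are assumptions):

  index=your_index sourcetype=your_csv_sourcetype City="Paris"
  | eval _time=strptime(Date, "%Y-%m-%d")
  | timechart span=1d avg(AverageTemperature) as avg_temp

For the index-time fix, set something like INDEXED_EXTRACTIONS=csv, TIMESTAMP_FIELDS=Date and a matching TIME_FORMAT in props.conf for that sourcetype, then re-ingest the file.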
Hello, I am forwarding data from an embedded system to an Enterprise instance running on a VM. The logs look like this:

acces_monitoring (indexed on Splunk; an empty Logoff_time means the session is still online):

     Access_IP         Access_time         Logoff_time
1    192.168.200.55    1699814895.000000
2    192.168.200.55    1699814004.000000   1699814060.000000
3    192.168.200.55    1699811754.000000   1699812677.000000
4    192.168.200.55    1699808364.000000   1699809475.000000
5    192.168.200.55    1699806635.000000   1699806681.000000
6    192.168.200.55    1699791222.000000   1699806628.000000
7    192.168.200.55    1699791125.000000   1699791127.000000
8    192.168.200.55    1699724540.000000   1699724541.000000
9    192.168.200.55    1699724390.000000   1699724474.000000

command_monitoring:

     Access_IP         exec_time           executed_command
1    192.168.200.55    1699813121.000000   cd ~
2    192.168.200.55    1699813116.000000   cd /opt
3    192.168.200.55    1699813110.000000   prova3
4    192.168.200.55    1699811813.000000   cat sshd_config
5    192.168.200.55    1699811807.000000   cd /etc/ssh
6    192.168.200.55    1699811801.000000   cd etc
7    192.168.200.55    1699811793.000000   cd
8    192.168.200.55    1699811788.000000   ls
9    192.168.200.55    1699811783.000000   e che riconosce le sessioni diverse
10   192.168.200.55    1699811776.000000   spero funziona
11   192.168.200.55    1699809221.000000   cat command_log.log
12   192.168.200.55    1699809210.000000   ./custom_shell.sh
13   192.168.200.55    1699808594.000000   CD /MEDIA
14   192.168.200.55    1699808587.000000   cd /medi
15   192.168.200.55    1699808584.000000   omar

When I try to join the two by running:

  index=main source="/media/ssd1/ip_command_log/command_log.log"
  | eval exec_time=strptime(exec_time, "%a %b %d %H:%M:%S %Y")
  | rename ip_execut as Access_IP
  | table Access_IP, exec_time, executed_command
  | join type=left Access_IP
      [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
        | dedup Access_time
        | eval Access_time=strptime(Access_time, "%a %b %d %H:%M:%S %Y")
        | eval Logoff_time=if(Logoff_time="still logged in", now(), strptime(Logoff_time, "%a %b %d %H:%M:%S %Y"))
        | table Access_IP, Access_time, Logoff_time ]
  | eval session_active = if(exec_time >= Access_time AND exec_time <= coalesce(Logoff_time, now()), "true", "false")
  | where session_active="true"
  | table Access_IP, Access_time, Logoff_time, exec_time, executed_command

it joins only against the last session (the one started at 1699814895.000000), so none of the commands run on the embedded system are matched to the correct session. What could be the catch?

Thanks in advance!
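One likely catch: by default join pairs each left-hand row with at most one subsearch row per Access_IP (max=1), so every command is matched against a single session only. A sketch of the same search with unlimited matches per IP, letting the time comparison pick the session whose window contains the command (same sources and field names as above; the dedup is dropped so all sessions survive):

  index=main source="/media/ssd1/ip_command_log/command_log.log"
  | eval exec_time=strptime(exec_time, "%a %b %d %H:%M:%S %Y")
  | rename ip_execut as Access_IP
  | join type=left max=0 Access_IP
      [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
        | eval Access_time=strptime(Access_time, "%a %b %d %H:%M:%S %Y")
        | eval Logoff_time=if(Logoff_time="still logged in", now(), strptime(Logoff_time, "%a %b %d %H:%M:%S %Y"))
        | table Access_IP, Access_time, Logoff_time ]
  | where exec_time >= Access_time AND exec_time <= coalesce(Logoff_time, now())
  | table Access_IP, Access_time, Logoff_time, exec_time, executed_command

Also worth double-checking the strptime formats: the tables above show epoch values (e.g. 1699814895.000000), and strptime with "%a %b %d %H:%M:%S %Y" on an epoch string returns null, which would make the time comparison silently fail.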
I have two strings that need to be searched in Splunk, each in a different index and a different sourcetype. One string is "published sourcing plan" and the other is "published transfer order". I need to find the "published transfer order" log; if it is not available within 5 minutes of the "published sourcing plan" log appearing in Splunk, I need to count it or retrieve some details, such as the salesorderid, from the "published sourcing plan" log. How do I prepare this search query in Splunk? In case no "published transfer order" log is available in Splunk at all, I also need to capture that.
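A rough sketch of one common pattern, assuming both logs carry a common salesorderid field to correlate on (index and sourcetype names are placeholders; if the transfer log has no salesorderid, some other shared key is needed):

  (index=index_a sourcetype=st_a "published sourcing plan") OR (index=index_b sourcetype=st_b "published transfer order")
  | eval event_type=if(searchmatch("published transfer order"), "transfer", "plan")
  | stats min(eval(if(event_type="plan", _time, null()))) as plan_time
          min(eval(if(event_type="transfer", _time, null()))) as transfer_time
          by salesorderid
  | where isnotnull(plan_time) AND (isnull(transfer_time) OR transfer_time - plan_time > 300)
  | where now() - plan_time > 300
  | table salesorderid plan_time transfer_time

The last where clause gives each sourcing plan a full 5 minutes before it is flagged; running this as a scheduled search (for example every 5 minutes over the last 15 minutes) then counts or lists the sales orders whose transfer order never arrived in time.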
Hello, I would like to properly parse rspamd logs that look like this (2 lines sample):   2023-11-12 16:06:22 #28191(rspamd_proxy) <8eca26>; proxy; rspamd_task_write_log: action="no action", digest="107a69c58d90a38bb0214546cbe78b52", dns_req="79", filename="undef", forced_action="undef", ip="A.B.C.D", is_spam="F", len="98123", subject="foobar", head_from=""dude" <info@example.com>", head_to=""other" <other@example.net>", head_date="Sun, 12 Nov 2023 07:08:57 +0000", head_ua="nil", mid="<B3.B6.48980.82A70556@gg.mta2vrest.cc.prd.sparkpost>", qid="0E4231619A", scores="-0.71/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-3.00){100.00%;},RBL_MAILSPIKE_VERYBAD(1.50){A.B.C.D:from;},RWL_AMI_LASTHOP(-1.00){A.B.C.D:from;},URI_COUNT_ODD(1.00){105;},FORGED_SENDER(0.30){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},MANY_INVISIBLE_PARTS(0.30){4;},ZERO_FONT(0.20){2;},BAD_REP_POLICIES(0.10){},MIME_GOOD(-0.10){multipart/alternative;text/plain;},HAS_LIST_UNSUB(-0.01){},ARC_NA(0.00){},ASN(0.00){asn:23528, ipnet:A.B.C.0/20, country:US;},DKIM_TRACE(0.00){example.com:+;},DMARC_POLICY_ALLOW(0.00){example.com;none;},FROM_HAS_DN(0.00){},FROM_NEQ_ENVFROM(0.00){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},HAS_REPLYTO(0.00){support@example.com;},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_ONE(0.00){1;},RCVD_COUNT_ZERO(0.00){0;},REDIRECTOR_URL(0.00){twitter.com;},REPLYTO_DOM_NEQ_FROM_DOM(0.00){},R_DKIM_ALLOW(0.00){example.com:s=scph0618;},R_SPF_ALLOW(0.00){+exists:A.B.C.D._spf.sparkpostmail.com;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="1605.197ms", user="undef" 2023-11-12 16:02:04 #28191(rspamd_proxy) <4a3599>; proxy; rspamd_task_write_log: action="no action", digest="5151f8aa4eaebc5877c7308fed4ea21e", dns_req="19", filename="undef", forced_action="undef", ip="E.F.G.H", is_spam="F", len="109529", subject="Re: barfoo", head_from="other me <other@example.net>", head_to="someone <someone@exmaple.fr>", head_date="Sun, 12 Nov 2023 16:02:03 +0100", head_ua="Apple Mail (2.3731.700.6)", mid="<3425840B-B955-4647-AB4D-163FC54BE820@example.net>", qid="163A215DB3", scores="-4.09/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-2.99){99.99%;},ARC_ALLOW(-1.00){example.net:s=openarc-20230616:i=1;},MIME_GOOD(-0.10){multipart/mixed;text/plain;},APPLE_MAILER_COMMON(0.00){},ASN(0.00){asn:12322, ipnet:E.F.0.0/11, country:FR;},FREEMAIL_CC(0.00){example.com;},FREEMAIL_ENVRCPT(0.00){example.fr;example.com;},FREEMAIL_TO(0.00){example.fr;},FROM_EQ_ENVFROM(0.00){},FROM_HAS_DN(0.00){},MID_RHS_MATCH_FROM(0.00){},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_TWO(0.00){2;},RCVD_COUNT_ZERO(0.00){0;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="428.021ms", user="me"     The field I need to split is symbols_scores_params. I’ve used this:   sourcetype=rspamd user=* | makemv tokenizer="([^,]+),?" symbols_scores_params | mvexpand symbols_scores_params | rex field=symbols_scores_params "(?<name>[A-Z0-9_]+)\((?<score>-?[.0-9]+)\){(?<options>[^{}]+)}" | eval {name}_score=score, {name}_options=options     It works great, proper fields are created (eg. BAYES_HAM_score, BAYES_HAM_options, etc.), but a single event is turned into a pack of 17 to 35 events. Is there a way to dedup those events and to keep every new fields extracted from symbols_scores_params ?
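One approach that often works: keep the fan-out and extraction exactly as above, then collapse back to one row per message with stats, grouping on a per-message key such as digest (or qid/mid). A sketch, assuming digest is unique per event:

  sourcetype=rspamd user=*
  | makemv tokenizer="([^,]+),?" symbols_scores_params
  | mvexpand symbols_scores_params
  | rex field=symbols_scores_params "(?<name>[A-Z0-9_]+)\((?<score>-?[.0-9]+)\){(?<options>[^{}]+)}"
  | eval {name}_score=score, {name}_options=options
  | stats first(action) as action first(scores) as scores first(user) as user
          values(*_score) as *_score values(*_options) as *_options
          by digest

stats supports wildcarded field names, so every BAYES_HAM_score, ARC_NA_options, and so on created by the eval survives the aggregation while the 17 to 35 intermediate rows are reduced back to one per digest.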
How do I get default fields like host, app, and service after using eval? After using eval, I am not able to fetch any default fields. Please advise.
I have the following query to find the response code and count versus time (in 1-minute intervals):

  index=sample_index path=*/sample_path*
  | bucket _time span=1m
  | stats count by _time responseCode

The result shows the response code and count versus time for each minute. But I only need the 1-minute buckets that contain a 403 response code (along with the other response codes in those buckets) and want to skip buckets that don't contain a 403. For example, if during time1 there are only events with response code 200, I don't need that in my result; but if during time2 there are events with response codes 200 and 403, I need those rows in the result as time, response code, count.
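A sketch of one way to keep only the 1-minute buckets that contain at least one 403, built on the same query (quote or tonumber the comparison depending on how responseCode is extracted):

  index=sample_index path=*/sample_path*
  | bucket _time span=1m
  | stats count by _time responseCode
  | eventstats sum(eval(if(responseCode=="403", count, 0))) as count_403 by _time
  | where count_403 > 0
  | fields - count_403

eventstats adds the per-minute 403 count to every row in that minute, so the where clause drops entire minutes with no 403 while leaving the remaining minutes' rows (200, 403, 500, ...) intact.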
Hi, we have a two-site indexer cluster with 6 indexers (3 per site), and we are upgrading the CPUs; each site will be offline for about 3 hours. Do I need to do anything in Splunk beforehand, or do I need to run ./splunk offline on the 3 indexers at a site before they are shut down? Thanks
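Not a substitute for the official cluster maintenance docs, but the usual sequence for a planned outage of a set of peers looks roughly like this (run from $SPLUNK_HOME/bin; which host is the cluster manager is an assumption):

  # On the cluster manager, before taking the site down: pause bucket fix-up activity
  ./splunk enable maintenance-mode

  # On each of the 3 indexers at the affected site, before shutting the OS down
  ./splunk offline

  # ... perform the CPU upgrade, boot the servers, let Splunk start and the peers rejoin ...

  # On the cluster manager, once all peers are back up
  ./splunk disable maintenance-mode

With a 3-hour outage per site it is also worth confirming that the remaining site can satisfy the replication and search factors on its own; if not, expect some searches to be incomplete until the peers return.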
Hi there: I have the following query:

  source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
  | eval e1_t=strptime(posted_timestamp, "%FT%T")
  | eval e2_t=strptime(msg_timestamp, "%FT%T")
  | eval lag_in_seconds=e1_t-e2_t
  | eval r2_posted_timestamp=posted_time
  | table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds

The query above can return multiple events with the same event_id and process_id but different posted_timestamp values. I need to return only the one with the earliest/oldest posted_time (one of the fields in the event). How can I change the above query to accomplish this? Thanks!
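A sketch of one way: sort by the parsed posted timestamp and dedup on the event_id/process_id pair so only the earliest row per pair survives (an equivalent alternative is stats earliest(...) by event_id process_id):

  source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
  | eval e1_t=strptime(posted_timestamp, "%FT%T")
  | eval e2_t=strptime(msg_timestamp, "%FT%T")
  | eval lag_in_seconds=e1_t-e2_t
  | eval r2_posted_timestamp=posted_time
  | sort 0 event_id process_id e1_t
  | dedup event_id process_id
  | table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds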
Hello Splunkers, I have an issue with UF file monitoring where the input is not being monitored / not forwarding events to Splunk. I do not have access to the server to run btool.

  [monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*]
  sourcetype = banana
  _meta = Appid::APP-1234 DataClassification::Unclassified
  index = test
  disabled = 0
  crcSalt = <SOURCE>
  ignoreOlderThan = 7d

The host(s) are sending _internal logs to Splunk. Here is what I see in splunkd.log (no errors). I also tried a wildcard (*) in the monitor stanza right after the /output directory, but it didn't work.

  TailingProcessor [MainTailingThread] - Parsing configuration stanza: monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*

Actual log file:

  -rw-r--r-- 1 automat autouser 6184 Oct 8 00:00 event_circuit.ABD.11082023
Good day everyone. Has someone here had experience extracting values from JSON? I have _raw events in JSON format, and I am trying to build a table that shows, on a single row, the top-level values together with the entity whose array has the most elements. Let me explain with the following example. This is the JSON that comes in each event:

  {
    "investigationStatus": "New",
    "status": 1,
    "priorityScore": 38,
    "workbenchName": "PSEXEC Execution By Process",
    "workbenchId": "WB-18286-20231106-00005",
    "severity": "low",
    "caseId": null,
    "detail": {
      "schemaVersion": "1.14",
      "alertProvider": "SAE",
      "description": "PSEXEC execution to start remote process",
      "impactScope": [
        { "entityValue": { "name": "SERVER01" }, "relatedIndicators": [ 2 ] },
        { "entityValue": { "name": "SERVER02" }, "relatedIndicators": [ 2, 3 ] },
        { "entityValue": { "name": "SERVER03" }, "relatedIndicators": [ 1, 2, 3, 4 ] },
        { "entityValue": { "name": "SERVER04" }, "relatedIndicators": [ 1 ] }
      ]
    }
  }

And this is the table I'm trying to get:

  workbenchId                 workbenchName                  severity  name_host
  "WB-18286-20231106-00005"   "PSEXEC Execution By Process"  "low"     "SERVER03"

As you can see, the first-level values of the JSON are there, and the host name is SERVER03 because it has the largest number of values in its "relatedIndicators" array (1 through 4); the other servers have fewer items in theirs. Any idea how I could achieve this? I tried with json_extract but didn't succeed.
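A sketch of one way, using spath to pull out the impactScope array, expanding it, counting each entity's relatedIndicators, and keeping the entity with the most per workbench (this assumes workbenchId, workbenchName and severity are already auto-extracted from the JSON; otherwise pull them with spath as well):

  index=your_index sourcetype=your_sourcetype
  | spath output=impactScope path=detail.impactScope{}
  | mvexpand impactScope
  | eval name_host=spath(impactScope, "entityValue.name")
  | eval indicator_count=mvcount(spath(impactScope, "relatedIndicators{}"))
  | sort 0 workbenchId -indicator_count
  | dedup workbenchId
  | table workbenchId workbenchName severity name_host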