All Topics


Hello colleagues. When running the command (/opt/splunk/bin/splunk reload deploy-server -class Class_Name -debug) I get the warning below. There was no such error before; it appeared just today. What could it be connected with?

Will setenv SPLUNK_CLI_DEBUG to "v".
In check_and_set_splunk_os_user(): In env found *no* SPLUNK_OS_USER var.
WARNING (cli_common) btool returned something in stderr: 'Will exec (detach=no): USER=root USERNAME=root PATH=/opt/splunk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/opt/splunk/bin PWD=/opt/splunk/etc/deployment-apps HOSTNAME=splunk-deployer SPLUNK_HOME=/opt/splunk SPLUNK_DB=/opt/splunkDBcold/DBdefault SPLUNK_SERVER_NAME=Splunkd SPLUNK_WEB_NAME=splunkweb PYTHONPATH=/opt/splunk/lib/python2.7/site-packages NODE_PATH=/opt/splunk/lib/node_modules LD_LIBRARY_PATH=/opt/splunk/lib LDAPCONF=/opt/splunk/etc/openldap/ldap.conf /opt/splunk/bin/splunkd btool web list
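A minimal check, assuming the stderr chatter is produced by the -debug flag itself (the output shows it setting SPLUNK_CLI_DEBUG, which makes btool verbose): run the same reload with and without -debug and compare. If the warning only appears with -debug, it is debug logging rather than a fault.

/opt/splunk/bin/splunk reload deploy-server -class Class_Name
/opt/splunk/bin/splunk reload deploy-server -class Class_Name -debug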
I have encountered an issue with the foreach command on mv-fields. When I execute my search, Splunk says: "Error in 'eval' command: The expression is malformed. An unexpected character is reached at '<<ITEM>>'."

SPL to reproduce:

| makeresults
| eval mvfield=mvappend("1", "2", "3"), total=0
| foreach mode=multivalue mvfield [eval total = total + <<ITEM>>]
| table mvfield, total

Note: this query is pulled directly from the examples for the foreach command.
Note 2: the "mode" argument is not syntax-highlighted (I would expect it to be green).
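One thing to rule out, offered as an assumption: foreach mode=multivalue was added in Splunk 9.0, so on an older instance the parser would not recognize <<ITEM>> in this context, which would also explain the missing syntax highlighting for "mode". A version-independent sketch that computes the same sum with mvexpand instead:

| makeresults
| eval mvfield=mvappend("1", "2", "3")
| mvexpand mvfield
| stats sum(mvfield) as total, list(mvfield) as mvfield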
What is the difference between now() and _time?
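In short: _time is the timestamp carried by each individual event, while now() returns the time the search was dispatched and is the same for every result. A quick way to see both side by side, assuming access to the _internal index:

index=_internal earliest=-5m
| eval search_started=now(), age_seconds=now() - _time
| table _time, search_started, age_seconds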
Hello, we have the following integration in place: GuardDuty -> EventBridge (no transformation) -> Firehose (no transformation) -> Splunk (Cloud).

HEC sourcetype = aws:cloudwatch:guardduty
HEC source override = aws_cloudwatchevents_guardduty

Despite the source override, we sometimes see events with source aws.guardduty, and in those cases the message format is different (so the search returns no results and we do not get an alert).

Single-source events start with:
{"schemaVersion":"2.0","accountId":"<some_account>","region":"<aws_region>","partition":...}

Double-source events start with an additional header/metadata envelope preceding schemaVersion:
{"version":"0","id":"<some_id>","detail-type":"GuardDuty Finding","source":"aws.guardduty","account":"<some_account>","time":"2022-09-07T11:55:02Z","region":"<aws_region>","resources":[],"detail":{"schemaVersion":"2.0","accountId":"<some_account>","region":"<aws_region>","partition":...}

AWS has been excluded as the source of the issue. Any ideas on how to get only one message format (a Splunk Support ticket has also been submitted)?
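Until the ingest side is fixed, one search-time workaround is to normalize both shapes before extraction: when the EventBridge envelope is present, the finding lives under the detail key. A sketch, where <your_index> is a placeholder:

index=<your_index> (source=aws_cloudwatchevents_guardduty OR source=aws.guardduty)
| eval finding=if(source="aws.guardduty", spath(_raw, "detail"), _raw)
| spath input=finding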
Hello community, I'm trying to make a simple dashboard but I'm running into a problem with displaying dates. I'm using Splunk Enterprise version 8.2.3.

In the data that I have to work with, I made changes in my search to display the time directly in the correct time zone (+2 hours). My search looks like this (it is certainly perfectible, but that is not the subject): (screenshot omitted)

Having obtained what I wanted in terms of display, I prepared a dashboard, but when I build it with Dashboard Studio, the display of dates does not seem to take some of my modifications into account: (screenshot omitted)

However, making the same request in a classic dashboard, I no longer have the problem: (screenshot omitted)

Is there something specific to do for Dashboard Studio? Am I the only one having this problem?

Best regards,
Tainted Rajaion
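One thing worth trying, on the assumption that Dashboard Studio is re-rendering recognized time fields on its own: emit the adjusted time as a plain string so the panel has nothing to reinterpret. A sketch with a hypothetical fixed 2-hour offset:

<your search>
| eval display_time=strftime(_time + 7200, "%Y-%m-%d %H:%M:%S")
| table display_time, <other fields>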
Start_Time=092659
Start_Date=20220908

My requirement is to find how many jobs run longer than a day; the two fields above carry the job's start date and time.
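A sketch of the duration math, assuming the event's _time marks when the job finished (if there is a separate end field, parse it with strptime the same way):

| eval start_epoch=strptime(Start_Date." ".Start_Time, "%Y%m%d %H%M%S")
| eval duration_sec=_time - start_epoch
| where duration_sec > 86400
| stats count as jobs_over_one_day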
Hello, I have logs like:
samples={'xxxxxxx' : {'111' :{'222' :{'333'}}}}{'yyyyyyy'{'444'}}{'zzzzzzz'}

I need to pull all the words into one field, like:
my field : 'xxxxxxx','yyyyyyy','zzzzzzz'

Thank you,
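A sketch, assuming the wanted words are exactly the alphabetic tokens (so the numeric ones like '111' are skipped): rex with max_match=0 collects every match into a multivalue field, and mvjoin flattens it into the quoted, comma-separated string.

| rex field=_raw max_match=0 "\{'(?<myfield>[A-Za-z]+)'"
| eval myfield="'".mvjoin(myfield, "','")."'"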
We're looking over our environment for potential security flaws. One question that came up is whether an admin user exists by default on Splunk Universal Forwarders (UF). I'm not thinking about the OS user the UF runs as, but about an admin user on the application layer. Older versions of Splunk Enterprise had the default admin password "changeme". Did this also apply to UFs? How can we make sure that there are no admin users on our UFs, or, if there are, that they have proper passwords?
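One quick local check, assuming a default install path: local Splunk accounts on a forwarder live in $SPLUNK_HOME/etc/passwd, so listing that file shows which users exist at the application layer (if the file is absent or empty, no local users have been created).

cat /opt/splunkforwarder/etc/passwd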
Hi,

Below are example raw logs:

20220906T23:43:58+03:00#0115dummyvalue.com#01110.111.169.11:51868#01110.45.38.135:8111#0110.527#011-#011-#011200#011200#0117180#011603#011GET /wapi/v3/gat/cust HTTP/1.1#0115ocilpapgap11.op.okobank.com
20220906T23:43:58+03:00#0115dummyvalue.com#01110.111.169.11:51868#01110.45.38.135:8111#0110.527#011-#011-#011200#011200#0117180#011603#011GET /wapi/v3/gat/cust/apis/info/015-000234567 HTTP/1.1#0115dummyvalue.com
20220906T23:43:58+03:00#0115dummyvalue.com#01110.111.169.11:51868#01110.45.38.135:8111#0110.527#011-#011-#011200#011200#0117180#011603#011GET /wapi/v3/gat/015-0000004847/cust/api HTTP/1.1#0115dummy value.com

From the above raw logs I need to extract these fields:

/wapi/v3/gat/cust
/wapi/v3/gat/cust/apis/info/015-000234567
wapi/v3/gat/015-0000004847/cust/api

and they should be extracted and displayed in a table/statistics in this format:

/wapi/v3/gat/cust
/wapi/v3/gat/cust/apis/info/{Id}
wapi/v3/gat/{Id}/cust/api

Basically the field should keep only the alphabetic segments (including that v3) and replace the digit sequences with {Id} wherever they exist.

Can someone help me with this? Thanks!
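A sketch: pull the path out of the HTTP request line, then replace any path segment made up only of digits and hyphens with {Id} (the lookahead leaves mixed tokens like v3 intact):

| rex field=_raw "(?:GET|POST)\s(?<api_path>\S+)\sHTTP"
| eval api_path=replace(api_path, "/[\d-]+(?=/|$)", "/{Id}")
| stats count by api_path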
Hi,

Is there any way to exclude from the end result any event that has more than one value of a field?

index=X status=1
| rex field=_raw Product\W.(?P<Product>\w*)
| rex field=_raw englishName\W.\W(?P<englishName>\w*.*\w)\W
| rex field=_raw name\W.\W(?P<name>\w*.*\w)
| eval indexTime=_indextime
| sort + indexTime
| stats list(name) as Customer, list(transaction) as amount, list(Product) as Products, list(currency) as currency, list(englishName) as Item
| fieldformat Time = strftime(Time, "%Y-%m-%d %H:%M:%S")

Data from an event:

name: "JohnA",selection=2,Product: "ABC",description=<null>,country='MT',selection=1,Product: "??",description=<null>,country='MT',selection=2,Product: "GOLD",description=<null>,country='MT',

Other results have only one Product in the event. I would like to exclude any event with more than one Product; I do not want them in the result. I tried to find out whether rex's max_match option can keep only the events with at most one match, without any luck.

Thank you in advance,
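A sketch of one way to do the filtering: let rex capture every Product occurrence (max_match=0 makes the field multivalue), then drop events where the value count exceeds one, before the stats.

index=X status=1
| rex field=_raw max_match=0 "Product\W.(?<Product>\w*)"
| where mvcount(Product) <= 1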
My UF is configured with a deployment server (8089) and with a HF (9997); neither is connecting. Troubleshooting steps performed:

1. disabled the iptables firewall
2. all servers are in the same subnet, so I believe there is no network firewall issue
3. configured outputs.conf under /opt/splunkforwarder/etc/system/local

[root@gcpas-d-sial02 ~]# tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log
09-08-2022 05:41:22.535 +0000 WARN TcpOutputProc [13177 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=10.236.65.143 inside output group HF from host_src=gcpas-d-sial02 has been blocked for blocked_seconds=34900. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
09-08-2022 05:41:28.180 +0000 INFO DC:DeploymentClient [13138 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
09-08-2022 05:41:28.180 +0000 INFO DC:PhonehomeThread [13138 PhonehomeThread] - Attempted handshake 2910 times. Will try to re-subscribe to handshake reply
09-08-2022 05:41:32.336 +0000 WARN AutoLoadBalancedConnectionStrategy [13178 TcpOutEloop] - Raw connection to ip=10.236.65.143:9997 timed out
09-08-2022 05:41:40.180 +0000 INFO DC:DeploymentClient [13138 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
09-08-2022 05:41:52.180 +0000 INFO DC:DeploymentClient [13138 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
09-08-2022 05:41:52.247 +0000 WARN AutoLoadBalancedConnectionStrategy [13178 TcpOutEloop] - Raw connection to ip=10.236.65.143:9997 timed out
09-08-2022 05:41:54.623 +0000 WARN HttpPubSubConnection [13137 HttpClientPollingThread_A4B05094-DB53-4495-B31D-853E566CE7E0] - Unable to parse message from PubSubSvr:
09-08-2022 05:41:54.623 +0000 INFO HttpPubSubConnection [13137 HttpClientPollingThread_A4B05094-DB53-4495-B31D-853E566CE7E0] - Could not obtain connection, will retry after=43.540 seconds.
09-08-2022 05:42:04.180 +0000 INFO DC:DeploymentClient [13138 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
09-08-2022 05:42:12.104 +0000 WARN AutoLoadBalancedConnectionStrategy [13178 TcpOutEloop] - Raw connection to ip=10.236.65.143:9997 timed out
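The repeated "timed out" entries suggest nothing is listening on those ports (or something upstream is dropping the connections). A quick reachability sketch, assuming nc is available and with <ds_host> as a placeholder for the deployment server's address:

# from the forwarder: can the ports be reached at all?
nc -vz 10.236.65.143 9997
nc -vz <ds_host> 8089
# on the HF: is splunkd actually listening on 9997?
netstat -tlnp | grep 9997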
The link to the Core Consultant Labs course is not working: https://www.splunk.com/en_us/training/courses/core-consultant-labs.html. Does anyone know how I can take the Core Consultant Labs?
Query:

| tstats avg(PREFIX(prtime)) as avg(prtime) where index=xdf source=sdsf TERM(pght=eff) OR TERM(pght=dfrg) OR TERM(pght=iojb) by PREFIX(pght=) _time span=1m
| rename pght= as Pght

This query is working fine and returns results in the format below:

Pght    _time                  avg(prtime)
eff     2022-09-07 13:00:00    40.667889889
dfrg    2022-09-07 13:01:00    75.678
iojb    2022-09-07 13:02:00    54.765423

but I want the results in this format:

_time                  eff             dfrg        iojb
2022-09-07 13:00:00    40.667889889    75.678      80.87656
2022-09-07 13:01:00    34.879          64.897      66.8765
2022-09-07 13:02:00    67.989          89.09876    67.985

Please let me know how to do this.
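A sketch of the pivot with xyseries, which turns each Pght value into its own column; the stat is renamed to the plain field name avg_prtime first, since that is easier to pass through than avg(prtime):

| tstats avg(PREFIX(prtime)) as avg_prtime where index=xdf source=sdsf TERM(pght=eff) OR TERM(pght=dfrg) OR TERM(pght=iojb) by PREFIX(pght=) _time span=1m
| rename pght= as Pght
| xyseries _time Pght avg_prtime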
I've run into a scenario where, when running stats over an index, I can generate a multivalue field with over 11K unique 38-character GUID values, though it can be as small as a single 38-character GUID. I need to pass the resulting GUIDs as a string into something that has a length limit of 999 characters. Is there a way, incrementing by 38 characters, to split the field into multiple fields of up to 950 characters each (which should be 25 GUIDs), dynamically, since I won't know how many are going to come in at any given time?
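A sketch of the chunking, assuming the multivalue field is called guid: expand it, number the values, bucket every 25 into a chunk, and join each chunk back into a string of at most 950 characters (25 x 38).

| mvexpand guid
| streamstats count as n
| eval chunk=ceiling(n / 25)
| stats list(guid) as guids by chunk
| eval guid_string=mvjoin(guids, "")

This yields one row per chunk rather than multiple fields on one row; if columns are needed instead, a transpose after the stats gets there.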
Hi, I am new to Splunk. This might have been asked and answered before, but I didn't find the answer when I searched. Here is my question: I have a base query that gets the ids field (e.g. 1234, 3213) from different hosts. I want to get the total number of ids per host.

Data:
host : ids : price : details
xyz : 123 : $45 : example
cds : 143 : $45 : example
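Assuming the base query already extracts an ids field, the per-host total is one stats away; dc() additionally collapses duplicate ids if that is wanted:

<base query>
| stats count(ids) as total_ids, dc(ids) as unique_ids by host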
Hi, I have 2 searches where the dedup strategy is different. I want to combine the 2 searches but need help with the dedup strategy.

Search 1:
index=prod sourcetype=error AND "IOS"
| dedup notification, source

Search 2:
index=prod sourcetype=error AND "Android"
| dedup _time

For "IOS" I need to dedup only on notification and source, and for "Android" only on _time:

index=prod sourcetype=error AND ("IOS" OR "Android")
| dedup ?????
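One sketch: build a synthetic key per platform and dedup on that. searchmatch() tests the raw event, so IOS events key on notification+source while Android events key on _time; the "ios:"/"android:" prefixes keep the two key spaces from colliding.

index=prod sourcetype=error AND ("IOS" OR "Android")
| eval dedup_key=if(searchmatch("IOS"), "ios:".notification.":".source, "android:"._time)
| dedup dedup_key
| fields - dedup_key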
Please share detailed documentation for HttpEventCollectorLogbackAppender in which each variable is explained. Please also share some samples of using HttpEventCollectorLogbackAppender with <type>raw</type>.
Has anyone been using Dashboard Studio and found that there is extreme lag when editing? I sometimes get kicked out by the browser freezing and closing, without having saved certain changes. It's to the point where I can't afford the wasted time or trust that it will save. Others on my team have experienced the same.
Hi, I want to count the number of containers per company. Each data point has a container id, a company id, and much more. If I use

stats count("coreData.containerNumber") BY "coreData.companyID"

it runs, but I don't get any results back. Likewise,

stats dc("coreData.containerNumber") as count by "coreData.companyID"

does not return results. Is the code correct?
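One sketch that sidesteps quoting problems with dotted JSON field names entirely: rename them to flat fields first and aggregate on those (spath re-extracts the JSON in case the fields aren't already present at search time):

| spath
| rename coreData.containerNumber as containerNumber, coreData.companyID as companyID
| stats dc(containerNumber) as container_count by companyID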
I'm looking to get the difference between the two times and create a third field for the result (Properties.actionedDate - _time). My current query is:

index=* source=*
| table Properties.actionedDate, _time

Here is a screenshot of my current result (screenshot omitted).
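A sketch of the subtraction, assuming Properties.actionedDate is an ISO-8601 string (adjust the strptime format to match what the field actually contains); note the single quotes that eval requires around dotted field names:

index=* source=*
| eval actioned_epoch=strptime('Properties.actionedDate', "%Y-%m-%dT%H:%M:%S.%N%z")
| eval diff_seconds=actioned_epoch - _time
| table Properties.actionedDate, _time, diff_seconds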