Hi guys,

I have a multi-array JSON input. What I am looking to do is split the initial raw data into separate JSON events.

Example input:

{
  "response": {
    "method": "switchvox.callQueues.getCurrentStatus",
    "result": {
      "call_queue": {
        "extension": "2070",
        "strategy": "ring_all",
        "queue_members": {
          "queue_member": [
            { "paused_time": "1626", "completed_calls": "1", "paused_since": "2021-01-08 08:59:28", "talking_to_name": "", "login_type": "login", "order": "1", "login_time": "7265", "extension": "4826", "max_talk_time": "835", "time_of_last_call": "2021-01-08 08:26:32", "paused": "1", "account_id": "1503", "missed_calls": "0", "logged_in_status": "logged_in", "fullname": "", "talking_to_number": "", "avg_talk_time": "835" },
            { "paused_time": "773", "completed_calls": "1", "paused_since": "", "talking_to_name": "", "login_type": "login", "order": "2", "login_time": "3713", "extension": "4824", "max_talk_time": "183", "time_of_last_call": "2021-01-08 08:13:34", "paused": "0", "account_id": "1587", "missed_calls": "1", "logged_in_status": "logged_in", "fullname": "", "talking_to_number": "", "avg_talk_time": "183" },

Desired output, as separate events:

{ "paused_time": "1626", "completed_calls": "1", "paused_since": "2021-01-08 08:59:28", "talking_to_name": "", "login_type": "login", "order": "1", "login_time": "7265", "extension": "4826", "max_talk_time": "835", "time_of_last_call": "2021-01-08 08:26:32", "paused": "1", "account_id": "1503", "missed_calls": "0", "logged_in_status": "logged_in", "fullname": "", "talking_to_number": "", "avg_talk_time": "835" }

and

{ "paused_time": "773", "completed_calls": "1", "paused_since": "", "talking_to_name": "", "login_type": "login", "order": "2", "login_time": "3713", "extension": "4824", "max_talk_time": "183", "time_of_last_call": "2021-01-08 08:13:34", "paused": "0", "account_id": "1587", "missed_calls": "1", "logged_in_status": "logged_in", "fullname": "", "talking_to_number": "", "avg_talk_time": "183" }

I think I need to use a transformation so this happens at index time, but I am not sure how to do it while making sure Splunk still parses the resulting data as JSON.
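One commonly suggested approach is to break the stream at index time with props.conf: strip the response envelope with SEDCMD and break events at the boundary between array elements. This is a sketch under assumptions -- the sourcetype name switchvox:queue and the exact SEDCMD patterns are hypothetical and need adjusting to the real envelope.

```ini
# props.conf on the first full Splunk instance that parses the data
# (sourcetype name and SEDCMD patterns are assumptions)
[switchvox:queue]
SHOULD_LINEMERGE = false
# Drop everything up to and including the opening of the queue_member array
SEDCMD-strip_head = s/^\{\s*"response".*?"queue_member":\s*\[//g
# Drop the closing brackets and braces of the envelope at the end
SEDCMD-strip_tail = s/\]\s*\}\s*\}\s*\}\s*\}$//g
# Start a new event between array elements; the captured comma is discarded
LINE_BREAKER = \}(,)\s*\{
TRUNCATE = 0
```

With this in place each remaining { ... } object should be indexed as its own event; setting KV_MODE = json for the sourcetype on the search head lets the fields extract from the now-valid JSON objects.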
Hey team, I wanted to use mTLS authentication to connect to a Splunk API endpoint via the Java SDK, but I can't seem to find a way to send CA certificates and other certs in the authentication request. Any leads on whether it is supported or not?
Can someone please help here? I do not want to send the logs to the Indexers, and I have referenced only vesxi in my transforms.conf as the target group, but the Splunk heavy forwarder is still sending the logs to the Indexer outputs (10.1.1.1:9996 and 10.1.1.2:9997).

outputs.conf
[tcpout:Indexers]
server = 10.1.1.1:9996,10.1.1.2:9997
[tcpout:vesxi]
server = 10.20.20.20:519
sendCookedData = false
disabled = false

transforms.conf
[vmwaresxilogs]
REGEX = (logged out|Rejected password for user|Cannot login|logged in as|Accepted user for user|was updated on host|Password was changed for account|Destroy VM called)
DEST_KEY = _TCP_ROUTING
FORMAT = vesxi

props.conf
[vmw-syslog]
TRANSFORMS-routing=vmwaresxilogs
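One thing worth checking (an assumption, since the full config is not shown): a transform with DEST_KEY = _TCP_ROUTING only reroutes the events that match its REGEX; everything else, including every other sourcetype, still goes to the default output group. A sketch of the usual fix:

```ini
# outputs.conf -- without an explicit defaultGroup, the forwarder may still
# load-balance cooked data across every tcpout group it knows about
[tcpout]
defaultGroup = vesxi   # or a deliberately non-existent group name to drop non-matching events

[tcpout:vesxi]
server = 10.20.20.20:519
sendCookedData = false

# transforms.conf -- to send ALL vmw-syslog events to vesxi regardless of content,
# widen the REGEX (the original only reroutes events matching the listed phrases)
[vmwaresxilogs]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = vesxi
```

Whether to keep the narrow REGEX depends on whether the non-matching vmw-syslog events should be dropped or routed elsewhere.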
Hello, I generally use the following format to limit specific search results from panel to panel. However, in this case I would like to search for anything containing the string.

[|loadjob $work_center_base$ | stats values(work_center) as location]

This limits the results in my search to only what is in the work_center field. I would like to match anything that contains what is in the location field. As written it won't match, because the locations have more description after the work_center. For example:

work_center: 1CAP1, 1CAP2, 1CAP3, 1CAP4
location (examples to match): 1CAP1-E3, 1CAP2-W4, 1CAP1-W1, 1CAP4-E1, ...

I am wondering if I can alter my search to match these and keep all locations that contain a work_center as a partial match.
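A sketch of one way to get wildcard matching out of the subsearch (assuming location is a search-time field on the outer events): append a * to each work_center value and let the subsearch render as OR'd search terms.

```spl
index=... sourcetype=...
    [| loadjob $work_center_base$
     | stats values(work_center) as location
     | mvexpand location
     | eval location=location."*"
     | fields location ]
| ...
```

The subsearch rows come back to the outer search as ( location="1CAP1*" OR location="1CAP2*" ... ), and the trailing * turns each term into a prefix match, so 1CAP1-E3 matches 1CAP1*.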
Hey everyone, I'm trying to write a search that will show the failed login events that occurred after the last successful logon event. So far I have this:

index="[index name]" sourcetype=WinEventLog EventCode=4625 earliest=lastLogon

where lastLogon should be the time value of the last event from this search:

index="[index name]" sourcetype=WinEventLog EventCode=4624

Failed logons: index="[index name]" sourcetype=WinEventLog "EventCode=4625"
Successful logons: index="[index name]" sourcetype=WinEventLog "EventCode=4624"

Does anyone have an idea of how to evaluate this?
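One way to wire the two searches together (a sketch, using the index name placeholder from the post) is to let a subsearch return the time of the most recent 4624 event as the earliest bound for the 4625 search:

```spl
index="[index name]" sourcetype=WinEventLog EventCode=4625
    [ search index="[index name]" sourcetype=WinEventLog EventCode=4624
      | head 1
      | return earliest=_time ]
```

The return command emits earliest=<epoch of the latest 4624> into the outer search as a time modifier, so only failed logons after the last success are returned. Note this finds the last success across all events; splitting per user would need a different approach (e.g. stats latest() by user).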
I have the below query, where I search for two text strings, count how many times each occurred, and find the difference.

("SSO Initiated" OR "SSO Completed") | stats count(eval(searchmatch("SSO Initiated"))) as SSO_Initiated count(eval(searchmatch("SSO Completed"))) as SSO_Completed | eval Difference=SSO_Initiated-SSO_Completed

I want to create an alert: if Difference > 20, then a mail needs to be sent. This check should run every 15 minutes, look at the last 15 minutes, and trigger the mail if Difference > 20.
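This can be done with a scheduled alert: add a where clause so the search only returns a row when the threshold is crossed, schedule it on a 15-minute cron over a 15-minute window, and trigger the email when the result count is greater than zero. A savedsearches.conf sketch (the stanza name and recipient address are placeholders; the same settings can be made in the alert UI):

```ini
[SSO Initiated vs Completed difference]
search = ("SSO Initiated" OR "SSO Completed") \
| stats count(eval(searchmatch("SSO Initiated"))) as SSO_Initiated \
        count(eval(searchmatch("SSO Completed"))) as SSO_Completed \
| eval Difference=SSO_Initiated-SSO_Completed \
| where Difference > 20
dispatch.earliest_time = -15m
dispatch.latest_time = now
cron_schedule = */15 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com
```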
Since I have gone through and tuned a lot of the content in ES, I am looking to see if anyone knows of a bulk way to add an Adaptive Response (as in, send an email) for every incident created. I am at the point now where things are in a good place and I would not be overwhelmed with the amount of email that would come in from the incidents. But now that I want to send an email for every one that is created, I don't want to have to go through Content Management and set an adaptive response for every single one that is enabled. For a little more info, I am using Splunk Cloud and ES, so for any back-end changes I would have to submit a ticket to support (which I am not against doing, I just want to make sure that is the route I need to go). TIA
Hello, I get the following error when performing an SPL query.

Query:
index=_* AND (SMTP OR sendemail OR email) AND (FAIL* OR ERR* OR TIMEOUT OR CANNOT OR REFUSED OR REJECTED) | sendemail to="***@gmail.com" sendresults=true

Error:
command="sendemail", [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond while sending mail to: ***@gmail.com

Of course it makes no difference which query I use; the sendemail option doesn't work. Can someone help me?
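[WinError 10060] is a plain TCP timeout: the Splunk server could not reach the SMTP host it is configured to use (by default localhost:25, where often nothing is listening). The usual fix is to point Splunk at a reachable mail relay under Settings > Server settings > Email settings, or in alert_actions.conf. A sketch assuming Gmail's relay (the credentials are placeholders; Gmail typically requires an app password, and your network must allow outbound SMTP):

```ini
# alert_actions.conf
[email]
mailserver = smtp.gmail.com:587
use_tls = 1
auth_username = yourname@gmail.com
auth_password = <app password>
from = splunk@yourdomain.com
```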
Hi all, I'm receiving a lot of splunk-optimize errors in splunkd.log.  ls -l /opt/splunk/var/lib/splunk/audit/db/hot_v1_455 | grep tsidx | wc -l 105 [root@indexer-2 splunk]# /opt/splunk/bin/splunk-optimize -v -d /opt/splunk/var/lib/splunk/audit/db/hot_v1_455 Logging configuration: verbose=1, log2splunk=0 tm= 1610364481 INFO splunk-optimize start: dir=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455 mode=0 isfinal=false max_iteration=2147483647 min_src_count=8 lex_tpb=64 write_level=1 target_size=1572864000 tm= 1610364481 DEBUG source_0=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610362948-1610362948-15554879922328683899.tsidx sz=1080 tm= 1610364481 DEBUG source_1=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363147-1610363147-16387064457660043774.tsidx sz=1168 tm= 1610364481 DEBUG source_2=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363267-1610363267-16901658911893982757.tsidx sz=1168 tm= 1610364481 DEBUG source_3=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363272-1610363272-16923132382574368602.tsidx sz=1176 tm= 1610364481 DEBUG source_4=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363092-1610363092-16150633440092436534.tsidx sz=1176 tm= 1610364481 DEBUG source_5=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363178-1610363178-16519774093793408615.tsidx sz=1176 tm= 1610364481 DEBUG source_6=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363057-1610363057-16000335144082864920.tsidx sz=1176 tm= 1610364481 DEBUG source_7=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363122-1610363122-16279466559003501729.tsidx sz=1176 tm= 1610364481 DEBUG source_8=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363058-1610363058-16004583287645382032.tsidx sz=1176 tm= 1610364481 DEBUG source_9=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363212-1610363212-16665396587756881875.tsidx sz=1176 tm= 1610364481 DEBUG source_10=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363152-1610363152-16408391753423458229.tsidx sz=1176 tm= 1610364481 
DEBUG source_11=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363242-1610363242-16797814191883895619.tsidx sz=1176 tm= 1610364481 DEBUG source_12=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363238-1610363238-16777050395829684027.tsidx sz=1176 tm= 1610364481 DEBUG source_13=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363062-1610363062-16021755056521050397.tsidx sz=1176 tm= 1610364481 DEBUG source_14=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363298-1610363298-17035162030350365663.tsidx sz=1176 tm= 1610364481 DEBUG source_15=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363182-1610363182-16537360674905484228.tsidx sz=1176 tm= 1610364481 DEBUG source_16=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363288-1610363288-16996035449514477598.tsidx sz=1224 tm= 1610364481 DEBUG source_17=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363221-1610363221-16704811249229177472.tsidx sz=1240 tm= 1610364481 DEBUG source_18=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363248-1610363248-16826109427840005476.tsidx sz=1248 tm= 1610364481 DEBUG source_19=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363226-1610363226-16726339824340034748.tsidx sz=1248 tm= 1610364481 DEBUG source_20=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363103-1610363103-16197833266659683219.tsidx sz=1248 tm= 1610364481 DEBUG source_21=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363038-1610363038-15927433915388732711.tsidx sz=1248 tm= 1610364481 DEBUG source_22=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363278-1610363278-16953970350838872388.tsidx sz=1248 tm= 1610364481 DEBUG source_23=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363048-1610363048-15965922895444683963.tsidx sz=1248 tm= 1610364481 DEBUG source_24=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363296-1610363296-17026567070646661147.tsidx sz=1248 tm= 1610364481 DEBUG 
source_25=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363268-1610363268-16910815734924420355.tsidx sz=1248 tm= 1610364481 DEBUG source_26=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363218-1610363218-16695936197528204421.tsidx sz=1248 tm= 1610364481 DEBUG source_27=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363118-1610363118-16266549629544376987.tsidx sz=1248 tm= 1610364481 DEBUG source_28=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363138-1610363138-16352697168739542857.tsidx sz=1248 tm= 1610364481 DEBUG source_29=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363084-1610363084-16116233719873926123.tsidx sz=1248 tm= 1610364481 DEBUG source_30=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363238-1610363238-16782065727925460598.tsidx sz=1248 tm= 1610364481 DEBUG source_31=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455/1610363174-1610363174-16502826878692865729.tsidx sz=1248 tm= 1610364481 ERROR optimize finished: failed, see rc for more details, dir=/opt/splunk/var/lib/splunk/audit/db/hot_v1_455, rc=-29 (unsigned 227), errno=2 tm= 1610364481 INFO exiting splunk-optimize process with rc=-29 (unsigned 227)

I haven't found any documentation on what return code -29 means, or how to get more logs for this issue. Could anybody help out? Regards, Andreas
Hi All, I have installed the Splunk master on Linux and a universal forwarder on a Windows box, and have also opened all ports. Currently, when I telnet from the Windows box to the server IP on port 9997, it times out. In the Splunk logs I found the following warnings: cooked connection timeouts and some certificate issues. Can you please provide a root cause or solution for this issue?
Restart DB Connect Task Server using REST or CLI
Hello, I would like to know which tsidx files were created before I increased the parameter "tsidxWritingLevel" from 1 to 3. Is there any way to know the "tsidxWritingLevel" of the tsidx files in a bucket? Thanks, Christian
Hello, I know that there is a limitation in Splunk that shows only a limited number of results. Is it possible to show all the data without that limitation? Thanks
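Which limit applies depends on the command being used, but the caps most often hit live in limits.conf. A sketch of the usual knobs (values here are illustrative; raise with care, since the defaults protect search-head memory):

```ini
# limits.conf
[searchresults]
maxresultrows = 100000    # default 50000; caps rows a search can return

[stats]
maxresultrows = 100000
maxvalues = 10000         # caps values kept per field by stats values()/list()
```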
I am using an Ansible script to deploy Docker containers with a Splunk image. I need to install packages in each container by running the following commands inside each container:

pip install -U future
pip install httplib2

Every time it leads to an error: command not found. I even added RUN before each command, but it errors just the same. Has anyone encountered this issue? Thank you.
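Two things worth noting (assumptions based on the error shown): RUN is Dockerfile syntax and means nothing in a command executed at runtime, and inside the Splunk image pip may not be on the default PATH, so invoking it through the Python interpreter is safer. An Ansible sketch, assuming the community.docker collection is installed and the container is named splunk1:

```yaml
- name: Install Python packages inside the Splunk container
  community.docker.docker_container_exec:
    container: splunk1
    command: /bin/bash -c "python3 -m pip install -U future httplib2"
```

If python3 is not on the container's PATH either, the full path to the interpreter shipped in the image would be needed instead.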
Dear all, please help. I have a log line like this:

[Information] PosService AddInfo:[5006] - Stop customer

And I want to show in a table the message after the ":". Currently I am using this rex, but I get no result:

| rex field=_raw "PosService\sAddInfo\:(?<addinfo>\w+|)"

Thank you!
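The rex captures nothing useful because \w+ cannot match the "[5006] - " part that follows the colon: brackets, spaces, and hyphens are not word characters. A sketch that captures the numeric code and the trailing message separately:

```spl
| rex field=_raw "PosService\sAddInfo:\[(?<code>\d+)\]\s-\s(?<addinfo>.+)"
| table code addinfo
```

For the sample line, addinfo would capture "Stop customer". If the whole text after the colon is wanted instead, capture with AddInfo:(?<addinfo>.+).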
Hi Team, recently we upgraded our Splunk Cloud to version 8.1.2011.1. We got a requirement to create a token, so I navigated to Settings and clicked Tokens. By default it was disabled, so I enabled it, and when I tried to create a token in the GUI I got the following error:

"Token creation failed because: Cannot use tokens for SAML user anandh because neither attribute query requests (AQR) nor scripted auth are supported."

I am an admin, but I still can't create the token. User authentication happens via SAML, and SAML has been configured on the Azure end. Kindly let me know how to fix this and create a token.
Hi All, I have a requirement to group keys (key-value pairs) matching a wildcard, like usermetadata_*, by another unique field value. Here is the query I am using to get all the keys as a column:

index=<index_name> sourcetype=<source_type> splunk_server_group=default | stats dc(usermetadata_*) as * | transpose | rename column as usermetadata | table usermetadata

I want the output like this:

id      usermetadata_keys
xyz     usermetadata_type
        usermetadata_eventName
        usermetadata_date
pqr     usermetadata_eventType
        usermetadata_date
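A sketch of one way to list which usermetadata_* keys are populated per id (this assumes id is an extracted field on the same events): use foreach to record each non-null field's name into a multivalue field, then group by id.

```spl
index=<index_name> sourcetype=<source_type> splunk_server_group=default
| foreach usermetadata_*
    [ eval keys=mvappend(keys, if(isnotnull('<<FIELD>>'), "<<FIELD>>", null())) ]
| stats values(keys) as usermetadata_keys by id
```

The <<FIELD>> token expands to each matching field name inside the foreach subsearch, so only the keys actually present on an event should accumulate in keys.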
I have a lookup table X which contains a list of servers, and my index (myserveridx) contains the list of servers which are up and running. I want to write a query to get the names of the servers which are present in the lookup table but not in the index.
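A sketch, assuming the server name field is called host in both the lookup and the index (adjust the field names to match): start from the lookup and exclude anything the index has reported.

```spl
| inputlookup X
| search NOT
    [ search index=myserveridx
      | stats count by host
      | fields host ]
```

The subsearch returns host="..." OR host="..." terms, so the NOT leaves only the lookup rows whose host never appeared in the index over the search time range.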
Hi, I would like to send alerts from Splunk to a specific folder on a file server instead of sending them to my email. Is there any way? Thanks
An alert was deleted; it no longer shows up under Content Management, but it still shows up in the Incident Review dropdown. Is there a way to remove it from the Incident Review dropdown?