All Posts

We can't use xmlkv; the customer will run index=indexname sourcetype=sourcetypename and the data should appear with all the fields extracted. The events are a combination of non-XML and XML formats. From the non-XML format we have the fields coming in, but from the XML format we don't get any fields. Ultimately, we have to automate the extraction using props.conf in the backend.
Is it possible to extract those XML parts first and then apply the xmlkv command to them?
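That two-step idea can at least be sketched at search time. A minimal example, assuming a single-line event where the XML fragment starts at the first opening tag and runs to the end of the event (the field name xml_part is made up for illustration; spath also understands XML, so it can stand in for xmlkv on an extracted field):

| rex field=_raw "(?<xml_part><[A-Za-z][^>]*>.*)"
| spath input=xml_part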
Was the issue fixed? I'm having exactly the same issue, and weeks ago it was working fine. No change was made to the lookup/dataset permissions, and the user I'm using to access it is the owner of the lookup. Could this be related to an expired Splunk certificate, or something else?
I appreciate your response here, but there are many XML tags in the event, as I mentioned in the example: abc xyz. So you do not know what tags are coming in the event; it is dynamic. My field should be created dynamically from the tag's name and the corresponding value. For example, given <abc>Wow</abc>, the field name should not be hardcoded as "abc"; it should take "abc" dynamically, and the value should be "Wow".
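For reference, search-time automatic XML extraction can also be switched on in props.conf, which creates fields named after whatever tags arrive. A minimal sketch, reusing the placeholder sourcetype from earlier in the thread (mixed non-XML events may still need the rex approach above):

# props.conf on the search head; stanza name is the placeholder sourcetype
[sourcetypename]
KV_MODE = xml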
I mean that you should use double quotes (") in this search:
| search test_field_name="test_field_name_1"
Hi @meetmshah, this is not working as expected. Search:
log_type=Passed_Authentications MESSAGE_TEXT="Command Authorization succeeded"
| rex field=CmdSet max_match=0 "CmdAV=(?<Command>[^\s]+)|\sCmdArgAV=(?<Command1>[^\s]+)"
| makemv delim="," allowempty=t Command1
| table _time, Command, Command1
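One thing worth trying, as a sketch rather than a confirmed fix: with the alternation, each match populates only one of the two capture groups, so splitting it into two independent rex calls may behave more predictably:

log_type=Passed_Authentications MESSAGE_TEXT="Command Authorization succeeded"
| rex field=CmdSet max_match=0 "CmdAV=(?<Command>[^\s]+)"
| rex field=CmdSet max_match=0 "CmdArgAV=(?<Command1>[^\s]+)"
| table _time, Command, Command1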
There doesn't appear to be anything wrong with the case statement on its own. However, there are other statements which might affect your result, e.g. dedup. Please can you share some events demonstrating your issue?
| spath "Action{}" output="Action" | spath "Effect" | mvexpand Action
Hi guys, I am using multiple keywords to get a count of errors from different messages, so I am trying a case statement to achieve it.
index="mulesoft" applicationName="api" environment="*" (message="Concur Ondemand Started") OR (message="API: START: /v1/fin_Concur") OR (message="*(ERROR): concur import failed for file*") OR (tracePoint="EXCEPTION")
| dedup correlationId
| eval JobName=case(like('message',"Concur Ondemand Started") OR like('message',"API: START: /v1/fin_Concur%") AND like('tracePoint',"EXCEPTION"),"EXPENSE JOB",like('message',"%(ERROR): concur import failed for file%"),"ACCURAL JOB")
| stats count by JobName
But I am getting only the EXPENSE JOB JobName. When I split it into two queries, both JobNames have results.
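One thing worth double-checking: in eval, AND binds more tightly than OR, so the first case() condition is read as A OR (B AND C). If the intent was (A OR B) AND C, explicit parentheses make that unambiguous; a sketch, assuming that grouping is what was meant (the dedup caveat in the reply above still applies):

| eval JobName=case(
    (like('message',"Concur Ondemand Started") OR like('message',"API: START: /v1/fin_Concur%")) AND like('tracePoint',"EXCEPTION"), "EXPENSE JOB",
    like('message',"%(ERROR): concur import failed for file%"), "ACCURAL JOB")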
Action Name   Effect
eks           allow
ecs           allow
config        deny
budgets       deny
It depends on what your macro expands to, but try this:
| search NOT `dm_mapped_indexes`
Otherwise, please provide more details, e.g. a cut-down, sanitised version of your macro.
Here are the answers to your questions:
1. It is the input file for the apps, all_env_component.csv
2. Yes, it works correctly.

data.componentId   downtime
Ycomp              322.186934
Zcomp              300.23822
Xcomp              645.415504

3. The fields are:
data.environment.application
data.environment.environment
data.environment.stack
data.componentId

4. This is an availability dashboard. The initial problem was that any data.componentId with 0 downtime would not show in the results (NULL). This was fixed by adding an input file, but then it was showing every data.componentId and downtime. The desired result is to display only the data.componentId and downtime for the single data.environment.application chosen in the drop-down; see the sketch after the query. Below is the original query, which would not display anything with 100% uptime.

index=MINE data.environment.application="app2" data.environment.environment="uat"
| eval estack="AW"
| fillnull value="uat" estack data.environment.stack
| where 'data.environment.stack'=estack
| streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
| eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
| transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
| stats sum(duration) as downtime by data.componentId
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
| rename data.componentId AS Component, avail AS Availability
| table Component, Availability
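Given the goal in point 4, one hedged way to keep zero-downtime components while still honouring the drop-down: filter the CSV by the same application before appending it, then collapse duplicates. This assumes all_env_component.csv carries a data.environment.application column and that the drop-down sets a token named $app$ (both are assumptions); it slots in right after the stats sum(duration) line:

| stats sum(duration) as downtime by data.componentId
| append
    [| inputlookup all_env_component.csv where data.environment.application="$app$"
     | eval downtime=0
     | fields data.componentId downtime]
| stats max(downtime) as downtime by data.componentId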
Please can you give an example of your expected results?
Hi @isoutamo, @tscroggins, and all. Added as a feature request: https://ideas.splunk.com/ideas/PLECID-I-786
I am using the query below to get the list of all indexes:
| eventcount summarize=false index=* | dedup index | fields index
The macro `dm_mapped_indexes` contains the list of indexes. Now I want to filter out all the indexes in this macro and get all the other indexes.
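Putting the two pieces together, a minimal sketch (assuming `dm_mapped_indexes` expands to a disjunction such as index=idx1 OR index=idx2, so NOT can negate it):

| eventcount summarize=false index=*
| dedup index
| fields index
| search NOT `dm_mapped_indexes`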
@yuanliu Thank you for your answer. This rephrasing of the problem is great, and this solution helps solve my issue using an additional group (I have a `uuid` field for each row that I can use). Many, many thanks. @bowesmana Thank you. I do have a `uuid` field for each row, which I did not have in the question, and I have gone ahead and used that.
Hello,
This question has probably been asked and answered, but I just can't seem to find the best solution. In the results I want to table the Allow and Deny values. The second result would be | search Action=eks* only where the Effect is Allow. I have tried until now, but I cannot relate the Allow action; it lists all values. Thanks in advance.
{
  "Action": [
    "eks:*",
    "ecs:*"
  ],
  "Effect": "Allow"
},
{
  "Action": [
    "config:*",
    "budgets:*"
  ],
  "Effect": "Deny"
}
@meshorer You will need to monitor ingestd.log on the platform to check for any ingestion failures. It's best to get this into Splunk, and it depends on the version you have as to how it gets there. In the latest version there is a UF on the box that you can configure in "Forwarder Settings", and this can send all of the SOAR logs into the splunk_app_soar index:
index=splunk_app_soar source=*ingestd.log
You should be able to build some detections there. In the older versions most data is sent via HEC but DOESN'T include these logs, so you will need to put a UF on the server yourself and then load the splunk_app_for_soar onto it; that should grab the daemon logs and send them to Splunk in the same way as above.
-- Did this fix the issue? If so, please mark it as a solution. Happy SOARing! --
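For the older-version route, a minimal inputs.conf sketch for a UF placed on the SOAR server; the log path and sourcetype below are assumptions to adjust for your install:

# inputs.conf on the UF (path and sourcetype are assumptions)
[monitor:///opt/phantom/var/log/phantom/ingestd.log]
index = splunk_app_soar
sourcetype = soar:ingestd
disabled = 0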
Hi Team,
The above is the event which we have received into Splunk. We have tried to extract fields such as Timestamp, Jobname, and Status using the query below:
index=app_events_dwh2_de_int _raw=*jobname*
| rex max_match=0 "\\\\\\\\\\\\\"jobname\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<Name>[^\\\]+)"
| rex max_match=0 "\\\\\\\\\\\\\"status\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<State>[^\\\]+)"
| rex max_match=0 "Timestamp\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<TIME>\d+\s*\d+\:\d+\:\d+)"
| rex max_match=0 "execution_time_in_seconds\\\\\\\\\\\\\": \\\\\\\\\\\\\"(?<EXECUTION_TIME>[\d\.\-]+)"
| table TIME, Name, State, EXECUTION_TIME
| mvexpand TIME
The issue is that we want to extract only those jobs with status "ENDED NOTOK", but we are unable to filter on them. Also, when we use the mvexpand command for the table, it shows multiple duplicate values. We request you to kindly look into this and help us with this issue.
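For the two remaining problems, one hedged pattern: the rex calls produce parallel multivalue fields, so zipping them together before a single mvexpand keeps each job's status next to its name and removes the duplicate rows, after which the ENDED NOTOK filter is a plain where. A sketch that slots in after the rex calls, in place of the final table/mvexpand, assuming none of the values contain the | delimiter:

| eval job=mvzip(mvzip(TIME, Name, "|"), State, "|")
| mvexpand job
| eval TIME=mvindex(split(job, "|"), 0), Name=mvindex(split(job, "|"), 1), State=mvindex(split(job, "|"), 2)
| where State="ENDED NOTOK"
| table TIME, Name, State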
    Dataframe row : {"_c0":{"0":"{","1":" \"0\": {","2":" \"jobname\": \"A001_GVE_ADHOC_AUDIT\"","3":" \"status\": \"ENDED NOTOK\"","4":" \"Timestamp\": \"20240317 13:25:23\"","5":" }","6":" \"1\": {","7":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS\"","8":" \"status\": \"ENDED NOTOK\"","9":" \"Timestamp\": \"20240317 13:25:23\"","10":" }","11":" \"2\": {","12":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY\"","13":" \"status\": \"ENDED NOTOK\"","14":" \"Timestamp\": \"20240317 13:25:23\"","15":" }","16":" \"3\": {","17":" \"jobname\": \"D001_GVE_SOFT_MATCHING_GDH_CA\"","18":" \"status\": \"ENDED NOTOK\"","19":" \"Timestamp\": \"20240317 13:25:23\"","20":" }","21":" \"4\": {","22":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TRX_ORG\"","23":" \"status\": \"ENDED NOTOK\"","24":" \"Timestamp\": \"20240317 13:25:23\"","25":" }","26":" \"5\": {","27":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_123\"","28":" \"status\": \"ENDED NOTOK\"","29":" \"Timestamp\": \"20240317 13:25:23\"","30":" }","31":" \"6\": {","32":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_45\"","33":" \"status\": \"ENDED OK\"","34":" \"Timestamp\": \"20240317 13:25:23\"","35":" }","36":" \"7\": {","37":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_ENPW\"","38":" \"status\": \"ENDED NOTOK\"","39":" \"Timestamp\": \"20240317 13:25:23\"","40":" }","41":" \"8\": {","42":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_T\"","43":" \"status\": \"ENDED NOTOK\"","44":" \"Timestamp\": \"20240317 13:25:23\"","45":" }","46":" \"9\": {","47":" \"jobname\": \"DREAMPC_CALC_ML_NAMESAPCE\"","48":" \"status\": \"ENDED NOTOK\"","49":" \"Timestamp\": \"20240317 13:25:23\"","50":" }","51":" \"10\": {","52":" \"jobname\": \"DREAMPC_MEMORY_AlERT_SIT\"","53":" \"status\": \"ENDED NOTOK\"","54":" \"Timestamp\": \"20240317 13:25:23\"","55":" }","56":" \"11\": {","57":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS\"","58":" \"status\": \"ENDED NOTOK\"","59":" \"Timestamp\": \"20240317 13:25:23\"","60":" }","61":" \"12\": {","62":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","63":" \"status\": \"ENDED NOTOK\"","64":" \"Timestamp\": \"20240317 13:25:23\"","65":" }","66":" \"13\": {","67":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS\"","68":" \"status\": \"ENDED OK\"","69":" \"Timestamp\": \"20240317 13:25:23\"","70":" }","71":" \"14\": {","72":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","73":" \"status\": \"ENDED OK\"","74":" \"Timestamp\": \"20240317 13:25:23\"","75":" }","76":" \"15\": {","77":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS\"","78":" \"status\": \"ENDED OK\"","79":" \"Timestamp\": \"20240317 13:25:23\"","80":" }","81":" \"16\": {","82":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","83":" \"status\": \"ENDED OK\"","84":" \"Timestamp\": \"20240317 13:25:23\"","85":" }","86":" \"17\": {","87":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH\"","88":" \"status\": \"ENDED OK\"","89":" \"Timestamp\": \"20240317 13:25:23\"","90":" }","91":" \"18\": {","92":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY\"","93":" \"status\": \"ENDED OK\"","94":" \"Timestamp\": \"20240317 13:25:23\"","95":" }","96":" \"19\": {","97":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT\"","98":" \"status\": \"ENDED NOTOK\"","99":" \"Timestamp\": \"20240317 13:25:23\"","100":" }","101":" \"20\": {","102":" \"jobname\": 
\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN\"","103":" \"status\": \"ENDED NOTOK\"","104":" \"Timestamp\": \"20240317 13:25:23\"","105":" }","106":" \"21\": {","107":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR\"","108":" \"status\": \"ENDED OK\"","109":" \"Timestamp\": \"20240317 13:25:23\"","110":" }","111":" \"22\": {","112":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY\"","113":" \"status\": \"ENDED OK\"","114":" \"Timestamp\": \"20240317 13:25:23\"","115":" }","116":" \"23\": {","117":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON\"","118":" \"status\": \"ENDED NOTOK\"","119":" \"Timestamp\": \"20240317 13:25:23\"","120":" }","121":" \"24\": {","122":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY\"","123":" \"status\": \"ENDED OK\"","124":" \"Timestamp\": \"20240317 13:25:23\"","125":" }","126":" \"25\": {","127":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI\"","128":" \"status\": \"ENDED NOTOK\"","129":" \"Timestamp\": \"20240317 13:25:23\"","130":" }","131":" \"26\": {","132":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY\"","133":" \"status\": \"ENDED NOTOK\"","134":" \"Timestamp\": \"20240317 13:25:23\"","135":" }"