All Posts

It depends on what your macro expands to, but try this:

| search NOT `dm_mapped_indexes`

Otherwise, please provide more details, e.g. a cut-down, sanitised version of your macro.
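For example, if `dm_mapped_indexes` expands to a list of index terms such as index=a OR index=b (an assumption; check your macro definition), the full search might look like this untested sketch:

| eventcount summarize=false index=*
| dedup index
| fields index
| search NOT `dm_mapped_indexes` ``` drops every index named inside the macro ```

If the macro expands to a different form (e.g. a bare comma-separated list), the NOT clause would need adapting to match on the index field.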
Here are the answers to your questions:
1. It is the input file for the apps, all_env_component.csv.
2. Yes, it works correctly:

data.componentId downtime
Ycomp 322.186934
Zcomp 300.23822
Xcomp 645.415504

3. The fields are: data.environment.application, data.environment.environment, data.environment.stack, data.componentId.
4. This is an availability dashboard. The initial problem was that any data.componentId with 0 downtime would not show in the results (NULL). This was fixed by adding an input file, but then it was showing all of the data.componentId and downtime values. The desired result is to display only the data.componentId and downtime for the single data.environment.application chosen in the drop-down. Below is the original query that would not display anything with 100% uptime.

index=MINE data.environment.application="app2" data.environment.environment="uat"
| eval estack="AW"
| fillnull value="uat" estack data.environment.stack
| where 'data.environment.stack'=estack
| streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
| eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
| transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
| stats sum(duration) as downtime by data.componentId
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
| rename data.componentId AS Component, avail AS Availability
| table Component, Availability
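One way to keep the zero-downtime components visible while still restricting the output to the application chosen in the drop-down might be to filter the CSV rows by the same token before merging them in. A rough, untested sketch: it assumes the CSV has a data.environment.application column and that the drop-down sets a token named app_tok (both assumptions), and it would slot in after the existing | stats sum(duration) as downtime by data.componentId step:

| append
    [| inputlookup all_env_component.csv ``` the input file named above ```
     | search data.environment.application="$app_tok$" ``` assumes this column exists and that $app_tok$ is the drop-down token ```
     | fields data.componentId
     | eval downtime=0]
| stats sum(downtime) AS downtime BY data.componentId

This way components with no failures still appear (with downtime=0), but only for the selected application.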
Please can you give an example of your expected results?
Hi @isoutamo, @tscroggins and all. Added as a feature request: https://ideas.splunk.com/ideas/PLECID-I-786
Below is the query I am using to get the list of all indexes:

| eventcount summarize=false index=*
| dedup index
| fields index

The macro `dm_mapped_indexes` contains a list of indexes. Now I want to filter out all the indexes contained in this macro and get all the other indexes.
@yuanliu Thank you for your answer. This rephrasing of the problem is great, and this solution helps solve my issue using an additional group (I have a `uuid` field for each row that I can use). Many thanks. @bowesmana Thank you. I do have a `uuid` field for each row, which I did not mention in the question, and have gone ahead and used that.
Hello, this question has probably been asked and answered, but I just can't seem to find a good solution. In the results I want to table the Allow and Deny values, and the second result would be the | search Action=eks* only where the Effect is Allow. Everything I have tried so far fails to relate the Action to its Effect; it lists all values. Thanks in advance.

{ "Action": [ "eks:*", "ecs:*" ], "Effect": "Allow" },
{ "Action": [ "config:*", "budgets:*" ], "Effect": "Deny" }
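For reference, one possible approach to relate each Action to its Effect (a rough sketch; it assumes the events are valid JSON with exactly the field names shown above, and the index/sourcetype names are placeholders):

index=my_index sourcetype=my_json ``` placeholder names ```
| spath path=Effect output=Effect
| spath path=Action{} output=Action
| mvexpand Action ``` one row per Action value, each keeping its own Effect ```
| search Effect="Allow" Action="eks*"
| table Action Effect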
@meshorer You will need to monitor the ingestd.log on the platform to check for any ingestion failures. It's best to get this into Splunk, and how it gets there depends on the version you have. In the latest version there is a UF on the box that you can configure in "Forwarder Settings", and this can send all of the SOAR logs into the splunk_app_soar index:

index=splunk_app_soar source=*ingestd.log

You should be able to build some detections there. In the older versions most data is sent via HEC but DOESN'T include these logs, so you will need to put a UF on the server yourself and then load the splunk_app_for_soar onto it; that should grab the daemon logs and send them to Splunk in the same way as above. -- Did this fix the issue? If so please mark as a solution. Happy SOARing! --
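As a starting point for those detections, a simple scheduled alert over the daemon logs might look like this (a sketch; the exact failure strings written to ingestd.log are an assumption and should be checked against your own events):

index=splunk_app_soar source=*ingestd.log ("error" OR "failed" OR "exception") ``` assumed keywords ```
| stats count BY host, source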
Hi Team, the above is the event which we have received into Splunk. We have tried to extract fields such as Timestamp, Jobname and Status using the below query:

index=app_events_dwh2_de_int _raw=*jobname*
| rex max_match=0 "\\\\\\\\\\\\\"jobname\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<Name>[^\\\]+)"
| rex max_match=0 "\\\\\\\\\\\\\"status\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<State>[^\\\]+)"
| rex max_match=0 "Timestamp\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<TIME>\d+\s*\d+\:\d+\:\d+)"
| rex max_match=0 "execution_time_in_seconds\\\\\\\\\\\\\":\s*\\\\\\\\\\\\\"(?<EXECUTION_TIME>[\d\.\-]+)"
| table TIME, Name, State, EXECUTION_TIME
| mvexpand TIME

The issue is that we want to extract only those jobs with status "ENDED NOTOK", but we are unable to filter on it. Also, when we use the mvexpand command on the table, it shows multiple duplicate values. We request you to kindly look into this and help us with this issue.
    Dataframe row : {"_c0":{"0":"{","1":" \"0\": {","2":" \"jobname\": \"A001_GVE_ADHOC_AUDIT\"","3":" \"status\": \"ENDED NOTOK\"","4":" \"Timestamp\": \"20240317 13:25:23\"","5":" }","6":" \"1\": {","7":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS\"","8":" \"status\": \"ENDED NOTOK\"","9":" \"Timestamp\": \"20240317 13:25:23\"","10":" }","11":" \"2\": {","12":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY\"","13":" \"status\": \"ENDED NOTOK\"","14":" \"Timestamp\": \"20240317 13:25:23\"","15":" }","16":" \"3\": {","17":" \"jobname\": \"D001_GVE_SOFT_MATCHING_GDH_CA\"","18":" \"status\": \"ENDED NOTOK\"","19":" \"Timestamp\": \"20240317 13:25:23\"","20":" }","21":" \"4\": {","22":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TRX_ORG\"","23":" \"status\": \"ENDED NOTOK\"","24":" \"Timestamp\": \"20240317 13:25:23\"","25":" }","26":" \"5\": {","27":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_123\"","28":" \"status\": \"ENDED NOTOK\"","29":" \"Timestamp\": \"20240317 13:25:23\"","30":" }","31":" \"6\": {","32":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_45\"","33":" \"status\": \"ENDED OK\"","34":" \"Timestamp\": \"20240317 13:25:23\"","35":" }","36":" \"7\": {","37":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_ENPW\"","38":" \"status\": \"ENDED NOTOK\"","39":" \"Timestamp\": \"20240317 13:25:23\"","40":" }","41":" \"8\": {","42":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_T\"","43":" \"status\": \"ENDED NOTOK\"","44":" \"Timestamp\": \"20240317 13:25:23\"","45":" }","46":" \"9\": {","47":" \"jobname\": \"DREAMPC_CALC_ML_NAMESAPCE\"","48":" \"status\": \"ENDED NOTOK\"","49":" \"Timestamp\": \"20240317 13:25:23\"","50":" }","51":" \"10\": {","52":" \"jobname\": \"DREAMPC_MEMORY_AlERT_SIT\"","53":" \"status\": \"ENDED NOTOK\"","54":" \"Timestamp\": \"20240317 13:25:23\"","55":" }","56":" \"11\": {","57":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS\"","58":" \"status\": \"ENDED NOTOK\"","59":" \"Timestamp\": \"20240317 13:25:23\"","60":" }","61":" \"12\": {","62":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","63":" \"status\": \"ENDED NOTOK\"","64":" \"Timestamp\": \"20240317 13:25:23\"","65":" }","66":" \"13\": {","67":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS\"","68":" \"status\": \"ENDED OK\"","69":" \"Timestamp\": \"20240317 13:25:23\"","70":" }","71":" \"14\": {","72":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","73":" \"status\": \"ENDED OK\"","74":" \"Timestamp\": \"20240317 13:25:23\"","75":" }","76":" \"15\": {","77":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS\"","78":" \"status\": \"ENDED OK\"","79":" \"Timestamp\": \"20240317 13:25:23\"","80":" }","81":" \"16\": {","82":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","83":" \"status\": \"ENDED OK\"","84":" \"Timestamp\": \"20240317 13:25:23\"","85":" }","86":" \"17\": {","87":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH\"","88":" \"status\": \"ENDED OK\"","89":" \"Timestamp\": \"20240317 13:25:23\"","90":" }","91":" \"18\": {","92":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY\"","93":" \"status\": \"ENDED OK\"","94":" \"Timestamp\": \"20240317 13:25:23\"","95":" }","96":" \"19\": {","97":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT\"","98":" \"status\": \"ENDED NOTOK\"","99":" \"Timestamp\": \"20240317 13:25:23\"","100":" }","101":" \"20\": {","102":" \"jobname\": 
\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN\"","103":" \"status\": \"ENDED NOTOK\"","104":" \"Timestamp\": \"20240317 13:25:23\"","105":" }","106":" \"21\": {","107":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR\"","108":" \"status\": \"ENDED OK\"","109":" \"Timestamp\": \"20240317 13:25:23\"","110":" }","111":" \"22\": {","112":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY\"","113":" \"status\": \"ENDED OK\"","114":" \"Timestamp\": \"20240317 13:25:23\"","115":" }","116":" \"23\": {","117":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON\"","118":" \"status\": \"ENDED NOTOK\"","119":" \"Timestamp\": \"20240317 13:25:23\"","120":" }","121":" \"24\": {","122":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY\"","123":" \"status\": \"ENDED OK\"","124":" \"Timestamp\": \"20240317 13:25:23\"","125":" }","126":" \"25\": {","127":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI\"","128":" \"status\": \"ENDED NOTOK\"","129":" \"Timestamp\": \"20240317 13:25:23\"","130":" }","131":" \"26\": {","132":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY\"","133":" \"status\": \"ENDED NOTOK\"","134":" \"Timestamp\": \"20240317 13:25:23\"","135":" }"  
We are planning to integrate our SAP BTP Fiori app with Cisco AppDynamics and need some guidance. Could you please provide us with information on the following:
- The initial setup required in AppDynamics for SAP Fiori apps.
- Any specific agents or SDKs we should use for monitoring Fiori apps.
- How to set up custom metrics and configure alerts for Fiori apps.
- Tips on troubleshooting common issues during integration.
We appreciate any documentation, resources, or advice you can share to help us ensure a smooth integration.
@bpenny did you ever figure this out? I'm running into the exact same issue. I think the problem is that we're referencing a JSON path. If I move the timestamp to a top-level JSON field in the event, it picks up the timestamp just fine.
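If it helps: TIME_PREFIX in props.conf is a regular expression applied to the raw event text, not a parsed JSON path, so you can usually anchor on the literal nested key instead. A sketch with a hypothetical sourcetype name and key (adjust both, plus the format string, to your data):

[my_json_sourcetype]
# TIME_PREFIX is matched against _raw, so the literal quoted key works even for nested JSON
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30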
Hi, following the suggestion, I tried it out and am successfully getting the expected behavior that only "admin" role users have the options to "Open in Search", "Export", etc. Amend the SPL search according to what you want to achieve.

<!-- Run a search to get the role value for the current user -->
<search>
  <query>| rest /services/authentication/current-context | search username!=splunk-system-user | fields roles</query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <done>
    <eval token="search_visible">if($result.roles$=="admin","true","false")</eval>
  </done>
</search>

<!-- Selectively disable only the export function: -->
<!-- the admin role can export, other roles can't -->
<option name="link.exportResults.visible">$search_visible$</option>
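One caveat with the snippet above: $result.roles$ is taken from the first result row, and users often hold several roles, so an exact comparison against "admin" can miss them. A more tolerant variant (untested sketch, same token name) is to join the roles in the query and use match() in the eval, replacing the corresponding lines above:

<query>| rest /services/authentication/current-context | search username!=splunk-system-user | eval roles=mvjoin(roles, ",") | fields roles</query>

<eval token="search_visible">if(match($result.roles$, "admin"), "true", "false")</eval>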
Oh, so in the future, timestamp issues will have to be resolved by restarting the instance. Thank you @gcusello!
Hi @MVK1, you can create your lookup using the Splunk Lookup Editor App (https://splunkbase.splunk.com/app/1724). Then you have to create your lookup definition [Settings > Lookups > Lookup Definitions > Create New Definition]; in this step, pay attention to the other properties, e.g. if you don't want the lookup to be case sensitive. Then you can manually populate this lookup using the Lookup Editor, or schedule a search to extract the FailureMsgs and store them in the lookup using the outputlookup command (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Outputlookup). Only one question: your lookup would have product and Feature, but I don't see this information in the sample you shared, so how would you get it? Ciao. Giuseppe
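For example, a scheduled search along these lines could populate the lookup automatically (a sketch; the index and sourcetype names are placeholders, and it assumes the events carry a FailureMsg field as in the thread):

index=my_index sourcetype=my_sourcetype FailureMsg=* ``` placeholder index/sourcetype ```
| stats count BY FailureMsg
| fields FailureMsg
| outputlookup failure_msgs.csv ``` the lookup file behind your lookup definition ```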
Hi @abi2023, as @marnall said, you can create different apps and deploy them to the UFs using different serverclasses. About data masking: to my knowledge, UFs take part only in the input phase, while the other phases (merging and parsing) happen on the first full Splunk instance the data passes through. In other words, on the Indexers or (when present) on the first Heavy Forwarder, but not on the UFs. If your concern is that data is sent in the clear, you can encrypt it between the UFs and the Indexers (or HFs), and then mask it on those other systems. Ciao. Giuseppe
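For reference, the masking itself is typically done on those systems with a SEDCMD in props.conf. A minimal sketch, using a hypothetical sourcetype and a pattern for 16-digit card-like numbers (both assumptions; adapt the regex to whatever you need to mask):

[my_sourcetype]
# rewrite 16-digit numbers at parse time; this runs on the Indexer/HF, not on the UF
SEDCMD-mask_card = s/\d{16}/XXXXXXXXXXXXXXXX/g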
Hi @satyaallaparthi, good for you, see you next time! Let me know if I can help you more, or please accept one answer for the other people in the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @dongwonn, not all configurations are reloaded with /debug/refresh. For this reason it's always better to restart Splunk. Ciao. Giuseppe
Hi @selvam_sekar, to identify the common clientids between the three sourcetypes, you could run something like this:

index=a sourcetype IN ("Cos","Ma","Ph")
| stats count dc(sourcetype) AS sourcetype_count BY clientid
| where sourcetype_count=3
| fields - sourcetype_count

Ciao. Giuseppe
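If you also want to see which of the three sourcetypes each common clientid appeared in, a small variant of the same search (same index and sourcetype assumptions) is:

index=a sourcetype IN ("Cos","Ma","Ph")
| stats dc(sourcetype) AS sourcetype_count values(sourcetype) AS sourcetypes BY clientid
| where sourcetype_count=3
| fields - sourcetype_count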
I don't know why, but after applying the settings and restarting, the year value was set normally.

[host::x.x.x.21]
TIME_PREFIX = ....
TIME_FORMAT = ....

So far I had reloaded the settings with /debug/refresh, but this time I tried reloading them by restarting Splunk. Our current environment makes it difficult to operate with just one server, but is it possible that there are cases where new settings are not reloaded without a restart?
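In cases like this, btool can at least confirm what the merged on-disk configuration says for the stanza, which helps separate "setting not picked up by /debug/refresh" from "setting not written at all". For example (standard Splunk CLI, run on the affected instance):

# show the merged props configuration for this host stanza, with the file each value comes from
$SPLUNK_HOME/bin/splunk btool props list host::x.x.x.21 --debug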