All Topics
Can you help me with this issue? As I am exceeding the character limit, I am posting the question in three parts in total. I am trying to combine two queries; I am getting a result, but not as expected.

Query1:

index=app-map-idx shname=niht_map_*
| append [| inputlookup customerdata.csv
    | addinfo
    | where _time>=info_min_time AND _time<=info_max_time
    | fields - info*
    | eval done=1, shname=device."_name", source=device."_name"]
| table _time sys operation web pass fail fvb quantity done shname
| eventstats sum(done) as done by shname sys
| bucket _time span=w
| stats sum(*) as * values(done) as done dc(web) as webb by _time shname sys operation
| appendpipe [| search sys="wi lapcdmb" operation="login page" OR operation="app page" OR operation="userpage" OR operation="custpage"
    | foreach pass fail fvb quantity [| eval <<FIELD>>=<<FIELD>>*0.09]
    | eval sys="pim"]
| eval "pass percentage"=pass/quantity*100
| eval iel=fail/quantity*100
| fillnull fvb value=0
| rename pass as Pass, fail as Fail, sys as Sys, fvb as Fvb, quantity as Quantity, operation as Operation, webb as Webb
| lookup application.csv Sys OUTPUT App mtr
| search mtr=given
| table _time "pass percentage" iel App Sys Operation Webb Pass Fail Quantity Fvb
| sort 0 Sys

Output:

_time | App | Sys | Operation | Webb | Pass | Fail | Quantity | Fvb | pass percentage | iel
2022-11-10 | Custom Customer | sys | sigin app - customer | 1 | 20 | 0 | 20 | 1 | 100 | 0
2022-11-10 | Cli app | CA | sigin app - application | 2 | 30 | 10 | 40 | 10 | 99 | 0.01
2022-11-10 | sigin app | SG | sigin app - client | 2 | 10 | 1 | 11 | 1 | 100 | 0
2022-11-10 | user app | UA | sigin app - app1 | 1 | 60 | 0 | 60 | 16 | 80 | 0.05
2022-11-10 | user app | UA | sigin app - app2 | 4 | 50 | 0 | 50 | 0 | 20 | 0.9
2022-11-10 | user app | UA | sigin app - app3 | 5 | 100 | 9 | 109 | 0 | 0 | 0
2022-11-10 | user app | UA | sigin app - app4 | 8 | 21 | 0 | 21 | 0 | 0 | 0
2022-11-10 | user app | UA | sigin app - app5 | 9 | 23 | 0 | 23 | 0 | 0 | 0
2022-11-10 | suzk app | SA | sigin app - app6 | 2 | 16 | 8 | 24 | 1 | 99 | 0
2022-11-10 | qualis app | QA | sigin app - app7 | 1 | 100 | 10 | 110 | 8 | 10 | 0
2022-11-10 | benz app | BA | sigin app - app8 | 2 | 500 | 11 | 511 | 6 | 20 | 0
2022-11-10 | client app | CA | sigin app - app1 | 4 | 200 | 0 | 200 | 4 | 100 | 0
2022-11-10 | client app | CA | sigin app - app2 | 1 | 800 | 1 | 801 | 1 | 100 | 0.11
2022-11-10 | client app | CA | sigin app - app3 | 3 | 220 | 1 | 221 | 0 | 90 | 0.22
2022-11-10 | client app | CA | sigin app - app4 | 2 | 1000 | 11 | 1011 | 0 | 99 | 0.45
2022-11-10 | client app | CA | sigin app - app5 | 1 | 1089 | 12 | 2178 | 9 | 99 | 0.56

Query2:

index=blrm-app-idx sourcetype=blrm_appl ("QA: ab" OR "QA: cd" OR "QA: ef" OR "QA: gh")
    ((procedure=POST web="/cate/surface*") OR (procedure=GET web="/choc/plain/otp*")
     OR (procedure=POST web="/may/duration") OR (procedure=GET web="/year/days")
     OR web="/nam/power/error" OR web="/move/cloud" OR web="/mont/days")
    NOT [| inputlookup angryer.csv | rename PI as result | fields result]
| eval type=case(
    faildigit=978 OR faildigit=435 OR faildigit=543 OR faildigit=987, "error",
    (faildigit>98763 AND faildigit<98765) OR faildigit=123 OR faildigit=456 OR faildigit=789, fvb,
    isnull(faildigit), "pass",
    isnotnull(faildigit), "error")
| search pipe="error"
| eval operation=case(
    match(web, "^/cate/surface"), "sigin app - customer",
    procedure="GET" AND match(web, "^/choc/plain/otp"), "sigin app - application",
    procedure="POST" AND web="/may/duration", "sigin app - client",
    procedure="GET" AND web="/year/days", "sigin app - app1",
    web="/nam/power/error", "sigin app - app2",
    web="/move/cloud", "sigin app - app3",
    web="/mont/days", "sigin app - app4",
    1=1, "unknown")
| eval web=procedure." ".web
| eval sys=case(cliapp="ab", "applibase", cliapp="cd", "cusbase", cliapp="ef", "efffilm", cliapp="gh", "gohome", 1=1, null())
| bin _time span=1h
| stats dc(cust_ip) as effect_apps by _time sys operation web

OUTPUT:

_time | sys | operation | web | effect_apps
2022-11-10 | UA | sigin app - app1 | 1 | 10
2022-11-10 | UA | sigin app - app2 | 2 | 12
2022-11-10 | UA | sigin app - app3 | 2 | 10
2022-11-10 | UA | sigin app - app4 | 1 | 40
2022-11-10 | UA | sigin app - app5 | 4 | 2
2022-11-10 | CA | sigin app - app1 | 5 | 6
2022-11-10 | CA | sigin app - app2 | 8 | 1
2022-11-10 | CA | sigin app - app3 | 9 | 3
2022-11-10 | CA | sigin app - app4 | 2 | 4
2022-11-10 | CA | sigin app - app5 | 1 | 8
Hello, I would like to extract specific values from a log and display them in my dashboard. For example, the value is: ?QuoteId=CA10118&AgentId=12345&state=MN&Category=RetailSales. Is it possible to extract "AgentId=12345" and "state=MN"? "AgentId" and "state" will always be the same; the value that follows will always change. I would also like to display each value in a separate column, i.e.

Agent ID | State
12345    | MN

Any help would be appreciated.
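One way to sketch this with rex, assuming the query string appears verbatim in the raw event (your_index is a placeholder):

index=your_index
| rex "AgentId=(?<AgentId>[^&\s]+)"
| rex "state=(?<State>[^&\s]+)"
| rename AgentId as "Agent ID"
| table "Agent ID" State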
I have a lookup containing IPs and I want to compare it to a field from events that contains a CIDR. I created a lookup definition and added the match type CIDR(ip) under the advanced settings.

My lookup:

ip      | exist
1.1.1.1 | yes
2.2.2.2 |

My event: cidr=1.1.1.254/32

My query:

| makeresults
| lookup ip ip as cidr output exist

What did I do wrong?
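For what it's worth, the CIDR match type works the other way around: the lookup field marked CIDR(...) is interpreted as a CIDR range and is matched against plain IP values coming from events. A sketch of that arrangement, with hypothetical names (a subnets.csv whose cidr column holds ranges such as 1.1.1.0/24, and match_type = CIDR(cidr) set on the lookup definition):

| makeresults
| eval src_ip="1.1.1.254"
| lookup subnets cidr AS src_ip OUTPUT exist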
Hi, I am looking for a way to convert a SHA-1 value to an integer in base 16, so I can do further arithmetic operations on it using eval. I tried:

..... | eval sha=sha1("xyz123") | eval dec_number=tonumber(sha,16)

but dec_number returns nothing. Could anyone suggest something?
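A likely cause, offered as a guess: a SHA-1 digest is 40 hex digits (160 bits), far beyond the roughly 53 bits of integer precision that eval numbers carry, so tonumber() cannot represent it and returns null. If a prefix of the hash is enough for your arithmetic, a sketch (12 hex digits is 48 bits, which fits):

| makeresults
| eval sha=sha1("xyz123")
| eval dec_number=tonumber(substr(sha,1,12),16)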
Hello Splunkers! We are using Splunk App for Infrastructure version 2.2.2 and have upgraded to the latest version, 2.2.5. However, after upgrading, the app got stuck during migration at "SAI is currently migrating to a newer version, page will automatically reload when migration finishes" and has remained in this state ever since. Has anyone encountered this in the past and solved the issue?
      index="dummy" url="https://www.dummy.com" status="200 OK" | stats count by id | where count > 10     If I apply this above query for 1 day, I would get this result, for example ... See more...
      index="dummy" url="https://www.dummy.com" status="200 OK" | stats count by id | where count > 10     If I apply this above query for 1 day, I would get this result, for example     id count ABC 50 XYZ 60     This would mean ABC called `https://www.dummy.com` 50 times in 1 day, and XYZ called that 60 times. Now I want to check this for 1 day but with every two hours interval Suppose, ABC called that request 25 times at 12:00 AM, and then calls it 25 times at 3:AM, and XYZ called all the 60 requests between 12 AM and 2 AM I want the output to look like this (time format doesn't matter)     id count time XYZ 60 12:00 AM ABC 25 12:00 AM ABC 25 2:00 AM     Also, If I modify the query like this,  count > 30, instead of count > 10, then it should only show the XYZ field, since ABC has 25 counts for both of them.  How do I modify my query? 
We have a rather huge environment with 2,000+ servers. Our company needs us to monitor the IIS logs. The problem is that not every server has IIS, and some have logs on the C: drive while others have them on the E: drive. We therefore set up monitors for "E:\inetpub\logs\LogFiles" and "C:\inetpub\logs\LogFiles" on all servers; it would be impossible to select just the servers with IIS running. The result is that we get lots of warnings. Some servers have logs on E:, some on C:, and some have no IIS logs at all, so the logs fill up with messages like this:

11-19-2022 16:22:04.199 +0100 WARN FilesystemChangeWatcher - error getting attributes of path "E:\inetpub\logs\LogFiles": The device is not ready.

1. Does this cause any problems?
2. This clogs up the logs, making it harder to find real problems. If these warnings are harmless, can they be turned off for some monitored files?
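If the warnings prove harmless (the monitor simply keeps retrying paths that do not exist yet), one option, sketched here rather than confirmed, is to raise the logging threshold for just that component on the forwarders so only errors from it are written; in $SPLUNK_HOME/etc/log-local.cfg:

[splunkd]
# Assumption: suppress WARN-level noise from this one component only
category.FilesystemChangeWatcher = ERROR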
Hi everyone, I am using the collect command to write data into a summary index. The search returns the values properly when I run it manually without collect, but once I add the collect command the values become 0. Attached are the screenshots from before collect and the job results of the saved search. If anyone has an idea about this, please let me know. Thanks
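For comparison, a minimal sketch of the collect pattern (my_summary is a made-up index name): running something like this once manually and then searching index=my_summary should show the same numbers, which helps isolate whether the saved-search schedule or the collect step is dropping them:

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by component
| collect index=my_summary marker="report=error_counts"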
I have below json data:  { "source": "Mule", "sourcetype": "_json", "index": "metrics", "event": [ { "date": "2022-11-19T13:57:01.427+05:30", "businessGroup": "testgroup", "businessGroupId": "1234", "coreServicesMetrics": { "users": { "total": 38 }, "environments": { "total": 3 } }, "designCenterMetrics": { "total": 5 }, "exchangeMetrics": { "total": 1, "reuse": { "fragments": 0, "implementedApis": 0 } }, "apiManagerMetrics": { "clients": 0, "apis": { "production": { "total": 0, "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0, "details": [ { "environmentName": "PROD", "environmentType": "Production", "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 } ] }, "sandbox": { "total": 0, "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0, "details": [ { "environmentName": "DEV", "environmentType": "Sandbox", "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 }, { "environmentName": "TEST", "environmentType": "Sandbox", "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 } ] } } }, "runtimeManagerMetrics": { "cloudhub": { "networking": { "vpcsTotal": 9, }, "applications": { "production": { "vcoresTotal": 31, "runtimesUsed": [ ], "runtimesUsedTotal": 0, "details": [ { "environmentName": "X", "environmentType": "X", "runtimesUsed": [ ], "runtimesUsedTotal": 0 } ] }, "sandbox": { "vcoresTotal": 14, "runtimesUsed": [ ], "runtimesUsedTotal": 0, "details": [ { "environmentName": "DEV", "environmentType": "Sandbox", "vcoresUsed": 0, "runtimesUsed": [ ], "runtimesUsedTotal": 0 }, { "environmentName": "TEST", "environmentType": "Sandbox", "runtimesUsed": [ ], "runtimesUsedTotal": 0 } ] } } } } }, { "date": "2022-11-19T13:57:01.427+05:30", "businessGroup": "MainGroup", "businessGroupId": "5678", "coreServicesMetrics": { "users": { "total": 16 }, "environments": { "total": 3 } }, "designCenterMetrics": { "total": 11 }, "exchangeMetrics": { "total": 11, "reuse": { "fragments": 10, } }, "apiManagerMetrics": { "clients": 0, "apis": { "production": { "total": 0 "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0, "details": [ { "environmentName": "X", "environmentType": "X", "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 } ] }, "sandbox": { "total": 20, "contracts": 24, "policiesUsed": [ "client-id-enforcement" ], "policiesUsedTotal": 1, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 1499, "details": [ { "environmentName": "DEV", "environmentType": "Sandbox", "contracts": 11, "policiesUsed": [ "client-id-enforcement" ], "policiesUsedTotal": 1, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 585 }, { "environmentName": "TEST", "environmentType": "Sandbox", "contracts": 13, "policiesUsed": [ "client-id-enforcement" ], "policiesUsedTotal": 1, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 914 } ] } } } } ] }   I want to extract below data : event{where businessGroup="MainGroup"}.apiManagerMetrics.apis.sandbox.details{where 
environmentName="DEV"}.transactions using splunk query. Can you please help me with that  
Hi, I have a dashboard that shows several charts, and some of them load slowly.
1. Is it possible to add a radio button with two values (OFF, ON) so that when a user opens the dashboard those charts don't load, and they only load when the user switches it on?
2. Is it possible to set a load priority, so the light charts load first and the heavy ones only start loading after the light ones have finished?

    <panel>
      <title>Number of Request Called From Webservice</title>
      <chart>
        <search>
          <query>index="app" "INFO  [APP] [log]"  | rex "status\[(?&lt;status&gt;\w+)" | timechart count(status)  by status usenull=f useother=f limit=0</query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">0</option>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.axisX.abbreviation">none</option>
        <option name="charting.axisX.scale">linear</option>
        <option name="charting.axisY.abbreviation">none</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.axisY2.abbreviation">none</option>
        <option name="charting.axisY2.enabled">0</option>
        <option name="charting.axisY2.scale">inherit</option>
        <option name="charting.chart">area</option>
        <option name="charting.chart.bubbleMaximumSize">50</option>
        <option name="charting.chart.bubbleMinimumSize">10</option>
        <option name="charting.chart.bubbleSizeBy">area</option>
        <option name="charting.chart.nullValueMode">gaps</option>
        <option name="charting.chart.showDataLabels">minmax</option>
        <option name="charting.chart.sliceCollapsingThreshold">0.01</option>
        <option name="charting.chart.stackMode">default</option>
        <option name="charting.chart.style">shiny</option>
        <option name="charting.drilldown">all</option>
        <option name="charting.layout.splitSeries">0</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
        <option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
        <option name="charting.legend.mode">standard</option>
        <option name="charting.legend.placement">bottom</option>
        <option name="charting.lineWidth">2</option>
        <option name="height">181</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>

Any ideas? Thanks
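For (1), a sketch using a token plus the panel's depends attribute (the token names show_heavy and load_heavy are made up): a panel whose depends token is unset is not rendered and its search is not dispatched, so the heavy charts stay idle until the user flips the radio button:

<input type="radio" token="show_heavy">
  <label>Heavy charts</label>
  <choice value="off">OFF</choice>
  <choice value="on">ON</choice>
  <default>off</default>
  <change>
    <condition value="on">
      <set token="load_heavy">true</set>
    </condition>
    <condition value="off">
      <unset token="load_heavy"></unset>
    </condition>
  </change>
</input>

<panel depends="$load_heavy$">
  <title>Number of Request Called From Webservice</title>
  <!-- existing chart goes here -->
</panel>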
Hi! I would like to separate the field Privilegio:

src_user      | Privilegio | count
user-RAC0308$ | SeSecurityPrivilege, SeBackupPrivilege, SeRestorePrivilege, SeTakeOwnershipPrivilege, SeDebugPrivilege, SeSystemEnvironmentPrivilege, SeLoadDriverPrivilege, SeImpersonatePrivilege, SeDelegateSessionUserImpersonatePrivilege, SeEnableDelegationPrivilege, SeCreateTokenPrivilege, SeAssignPrimaryTokenPrivilege | 8127

It only counts the first value, and the other privileges come along tab-separated. These are the Windows privileges of EventID 4672. My query is the following:

index=oswinsec EventCode=4672
| stats values(PrivilegeList) as Privilegio count by src_user
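A sketch that splits the tab-separated list into individual values and counts per privilege (makemv's tokenizer treats each run of non-whitespace characters as one value):

index=oswinsec EventCode=4672
| makemv tokenizer="(\S+)" PrivilegeList
| mvexpand PrivilegeList
| stats count by src_user PrivilegeList
| rename PrivilegeList as Privilegio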
Hello everyone, my aim is to get the dropdown's selected value as a token so that I can use it in a SearchManager query. I am also using dependent dropdowns: how do I pass the token of the portfolio dropdown to the application_code dropdown, so that when the portfolio value changes the application_code dropdown repopulates and shows the data for the currently selected portfolio, with the application_code dropdown falling back to its default value? I am attaching the piece of code I am using. Could someone please help me and explain how tokens work in JS? I am new to Splunk JS.

xml:

<form script="demo.js">
  <label>Demo</label>
  <row>
    <panel>
      <html>
        <div id="mydropdownview1"/>
        <div id="mydropdownview2"/>
        <div id="mytimerangeview"/>
        <div id="singleid0"/>
        <div id="singleid1"/>
        <div id="singleid2"/>
        <div id="singleid3"/>
        <div id="singleid4"/>
        <div id="singleid5"/>
        <div id="singleid6"/>
        <div id="singleid7"/>
        <div id="singleid8"/>
        <div id="singleid9"/>
        <div id="singleid10"/>
        <div id="singleid11"/>
      </html>
    </panel>
  </row>
</form>

demo.js:

require(["splunkjs/ready!"], function (mvc) {
    var deps = [
        "jquery",
        "splunkjs/mvc/dropdownview",
        "splunkjs/ready!",
        "splunkjs/mvc/searchmanager",
        "splunkjs/mvc/tableview",
        "splunkjs/mvc/singleview",
        "splunkjs/mvc/timerangeview",
        "splunkjs/mvc"
    ];
    require(deps, function(mvc) {
        // var randomid = () => Math.random()
        const searchId = Date.now() + '';
        const searchId1 = searchId + Date.now() + '';
        const searchId2 = searchId1 + Date.now() + '';
        const searchId3 = searchId2 + Date.now() + '';
        const searchId4 = searchId3 + Date.now() + '';
        const searchId5 = searchId4 + Date.now() + '';
        const searchId6 = searchId5 + Date.now() + '';
        const searchId7 = searchId6 + Date.now() + '';
        const searchId8 = searchId7 + Date.now() + '';
        const searchId9 = searchId8 + Date.now() + '';
        const searchId10 = searchId9 + Date.now() + '';
        const searchId11 = searchId10 + Date.now() + '';
        const dropdownsearch1 = searchId11 + Date.now() + '';
        const dropdownsearch2 = dropdownsearch1 + Date.now() + '';

        var SearchManager = require("splunkjs/mvc/searchmanager");
        var DropdownView = require("splunkjs/mvc/dropdownview");
        var m = require("splunkjs/mvc");
        var TimeRangeView = require("splunkjs/mvc/timerangeview");

        var mychoices = [
            {label: "ALL", value: "*"},
        ];

        // Access the "default" token model
        var tokens = m.Components.get("default");
        // Retrieve the value of a token $mytoken$
        // portfolio_token = tokens.get("portfolio_token");

        /** Dropdowns */
        var portfolio = new DropdownView({
            id: "dropdownid1",
            managerid: dropdownsearch1,
            default: "demo_portfolio_value",
            labelField: "portfolio",
            valueField: "portfolio",
            el: $("#mydropdownview1")
        }).render();

        new SearchManager({
            id: dropdownsearch1,
            search: `| inputlookup demo_portfolio_filter.csv | table portfolio | dedup portfolio | sort portfolio`
        });

        // defaultTokenModel.set("portfolio_token", "*");
        // var portfolio_dropdown = tokens.get("portfolio_token")

        var application_code = new DropdownView({
            id: "dropdownid2",
            choices: mychoices,
            managerid: dropdownsearch2,
            // default: "ALL",
            selectFirstChoice: "true",
            labelField: "application_name",
            valueField: "application_code",
            el: $("#mydropdownview2")
        }).render();

        var portfolio_search = new SearchManager({
            id: dropdownsearch2,
            search: `| inputlookup demo_portfolio_filter.csv | search portfolio=${portfolio.val()} | eval application_name=application_code."-".application_name | table application_name application_code | sort application_code`
        });

        // Instantiate a view using the default time range picker
        var mytimerange = new TimeRangeView({
            id: "example-timerange",
            managerid: "example-search",
            preset: "-4h@m",
            el: $("#mytimerangeview")
        }).render();

        var Controlm_NOK = new SearchManager({
            id: searchId,
            label: "NOK Percent Controlm",
            earliest_time: mytimerange.val().earliest_time,
            latest_time: mytimerange.val().latest_time,
            search: `somebasesearch | stats values(percentage) by application_code`,
            preview: true,
            autostart: true,
            cache: true
        });

        /** OK Percent */
        const search1 = new SearchManager({
            id: searchId1,
            label: "OK Percent Controlm",
            earliest_time: mytimerange.val().earliest_time,
            latest_time: mytimerange.val().latest_time,
            search: `somebasesearch | stats values(OK_Percentage) by application_code`,
            preview: true,
            autostart: true,
            cache: true
        });

        var SingleView = require('splunkjs/mvc/singleview');
        new SingleView({
            id: "single0",
            managerid: searchId,
            underLabel: "singleview nok",
            colorMode: "block",
            drilldown: "none",
            rangeColors: "[\"0x6db7c6\",\"0x65a637\",\"0xf7bc38\",\"0xd93f3c\"]",
            rangeValues: "[0,80,95]",
            useColors: true,
            "trellis.enabled": true,
            "trellis.splitBy": "Location_Name",
            "trellis.size": "medium",
            el: $("#singleid0")
        }).render();

        new SingleView({
            id: "single1",
            underLabel: "singleview ok",
            managerid: searchId1,
            colorMode: "block",
            drilldown: "none",
            rangeColors: "[\"0x6db7c6\",\"0x65a637\",\"0xf7bc38\",\"0xd93f3c\"]",
            rangeValues: "[0,80,95]",
            useColors: true,
            "trellis.enabled": true,
            "trellis.splitBy": "Location_Name",
            "trellis.size": "medium",
            el: $("#singleid1")
        }).render();
    });
});
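A sketch of the usual token wiring, hedged since I can't run it against your lookup: publish the selection into the default token model on change, and let the dependent SearchManager consume the token via m.tokenSafe() so it re-dispatches automatically whenever $portfolio_token$ changes. The variables m, tokens, application_code, and dropdownsearch2 are the ones defined above; the token name comes from your commented-out line, and this SearchManager is meant to replace the portfolio_search block above:

// Publish the portfolio selection as a token whenever it changes
portfolio.on("change", function() {
    tokens.set("portfolio_token", portfolio.val());
    // Reset the dependent dropdown to its default while it repopulates
    application_code.val("*");
});

// Seed the token once so the dependent search has an initial value
tokens.set("portfolio_token", portfolio.val() || "*");

// The dependent search consumes the token; tokenSafe() re-runs it on token change
var portfolio_search = new SearchManager({
    id: dropdownsearch2,
    search: m.tokenSafe('| inputlookup demo_portfolio_filter.csv | search portfolio=$portfolio_token$ | eval application_name=application_code."-".application_name | table application_name application_code | sort application_code')
});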
We are using the latest version of the Splunk Add-On for Salesforce. The integration account we are using on the Salesforce side is set up with the System Administrator profile, which works fine, but because of the elevated access we don't want to use it. We have a more limited profile that seems to have all the correct permissions, but when we look at the Event Log data that comes over, it only contains events for the integration account and no other users. I've compared the profiles, and the only difference I see is that System Administrator has access to all objects; I'm not sure why that would matter, though. Any ideas on what permission we might be missing?
Up to 8.5, I had no problem downloading results. In 9.0.1, the server returns

<response>
  <messages>
    <msg type="ERROR">Service Unavailable</msg>
  </messages>
</response>

whenever I try to export (download), whether from the search window or from a dashboard. The problem, it seems, is that /servicesNS/admin/search/search/jobs/<job id>/results/export invokes a Python script that thinks my server_hostname is 127.0.0.1 (localhost), while the server's certificate is for the server name. (I am using a publicly signed custom cert.) Does anyone else get this problem? How do you fix it?

To test, I run a simple search "| tstats count where index=_internal", then click the download/export button. The server then returns the above error message. web_service.log shows these errors:

2022-11-18 21:24:52,377 INFO [6377f8245c7fc3f4089c10] startup:139 - Splunk appserver version=9.0.1 build=82c987350fde isFree=True isTrial=False
2022-11-18 21:24:52,415 ERROR [6377f8245c7fc3f4089c10] __init__:868 - Socket error communicating with splunkd (error=[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)), path = /servicesNS/admin/search/search/jobs/1668806682.36752/results/export?output_mode=csv&f=count&output_time_format=%25Y-%25m-%25dT%25H%3A%25M%3A%25S.%25Q%2B0000
2022-11-18 21:24:52,416 ERROR [6377f8245c7fc3f4089c10] decorators:318 - Splunkd daemon is not responding: ("Error connecting to /servicesNS/admin/search/search/jobs/1668806682.36752/results/export?output_mode=csv&f=count&output_time_format=%25Y-%25m-%25dT%25H%3A%25M%3A%25S.%25Q%2B0000: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)",)
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 850, in streamingRequest
    conn.connect()
  File "/opt/splunk/lib/python3.7/http/client.py", line 1451, in connect
    server_hostname=server_hostname)
  File "/opt/splunk/lib/python3.7/ssl.py", line 428, in wrap_socket
    session=session
  File "/opt/splunk/lib/python3.7/ssl.py", line 878, in _create
    self.do_handshake()
  File "/opt/splunk/lib/python3.7/ssl.py", line 1147, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 304, in handle_exceptions
    return fn(self, *a, **kw)
  File "</opt/splunk/lib/python3.7/site-packages/decorator.py:decorator-gen-1471>", line 2, in getJobAsset
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/decorators.py", line 359, in apply_cache_headers
    response = fn(self, *a, **kw)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/search.py", line 392, in getJobAsset
    return self.streamJobExport(job, asset, **kwargs)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/controllers/search.py", line 154, in streamJobExport
    stream = rest.streamingRequest(uri, getargs=getargs, postargs=postargs, timeout=export_timeout)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 869, in streamingRequest
    raise splunk.SplunkdConnectionException('Error connecting to %s: %s' % (path, str(e)))
splunk.SplunkdConnectionException: Splunkd daemon is not responding: ("Error connecting to /servicesNS/admin/search/search/jobs/1668806682.36752/results/export?output_mode=csv&f=count&output_time_format=%25Y-%25m-%25dT%25H%3A%25M%3A%25S.%25Q%2B0000: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'. (_ssl.c:1106)",)

Given that there was no such problem before Splunk 9, I surmise that Splunk 9 added some server-name configuration for downloads that is missing from my local configuration, but I cannot find any. My local configs are:

server.conf, which contains the serverName property pointing to my server's name:

[general]
serverName = <my server domain>
pass4SymmKey = <some key>

[sslConfig]
sslVerifyServerCert = true
#cliVerifyServerName = true
# SSL settings
sslPassword = <some password>
serverCert = /var/opt/<some file path>.crt
caCertFile = /etc/pki/tls/certs/ca-bundle.crt

web.conf, which contains no name, so I assume it uses serverName from server.conf:

[settings]
httpport = 443
enableSplunkWebSSL = true
privKeyPath = /var/opt/<some file path>.key
serverCert = /var/opt/<some file path>.crt
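One thing that might be worth checking, offered as an assumption rather than a confirmed fix: Splunk Web reaches splunkd through the management URI, which defaults to 127.0.0.1:8089, so with sslVerifyServerCert = true the certificate's name can never match the address being dialed. Pointing web.conf's mgmtHostPort at the name on the certificate is one sketch of a workaround (hostname and port are placeholders):

[settings]
# Assumption: splunkd listens on 8089 and the cert covers this name
mgmtHostPort = <my server domain>:8089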
I have this query:

index=tenable sourcetype="tenable:io:vuln" state!=fixed eventtype="*"
| dedup dns_name plugin.id
| eval discovery = strptime(last_found, "%Y-%m-%dT%H:%M:%S.%3N%Z") - strptime(first_found, "%Y-%m-%dT%H:%M:%S.%3N%Z")
| eval Age = round(discovery / 86400, 2)
| eval first_found=strftime(strptime(first_found,"%Y-%m-%dT%H:%M:%S.%3N"),"%d-%B-%y")
| eval last_found=strftime(strptime(last_found,"%Y-%m-%dT%H:%M:%S.%3N"),"%d-%B-%y")
| table plugin.id dns_name first_found last_found Age check_type category severity

I am trying to create a trending chart that shows the number of plugin.id values by week for the past 30 days.
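A sketch of the weekly trend, assuming "number of plugin.id" means the distinct count of plugin IDs seen per week (run over the last 30 days via the time picker, or with earliest as below):

index=tenable sourcetype="tenable:io:vuln" state!=fixed eventtype="*" earliest=-30d
| timechart span=1w dc(plugin.id) as plugins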
Normally, we save a Splunk dashboard with the .xml file extension, and we promote changes via Git. I want to know about Splunk Dashboard Studio: what is its file extension, and after creating a dashboard via the UI, how can I promote those dashboards?
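To the best of my knowledge (worth verifying on your own instance), Dashboard Studio dashboards are still stored as .xml files under an app's data/ui/views directory, with the JSON definition embedded in a CDATA block, so the same Git promotion flow should apply. The stored file looks roughly like this:

<dashboard version="2" theme="light">
  <label>My Studio Dashboard</label>
  <definition><![CDATA[
    {"dataSources": {}, "visualizations": {}, "layout": {}}
  ]]></definition>
</dashboard>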
Paranumber | Name
95929      | Magnolia Jones Sr.
35716      | Leslie Streich
99265      | Magnolia Jones Sr.
152743     | Kacey Cartwright
99265      | Terence Deckow
95929      | Magnolia Jones Sr.
131568     | Dr. Ubaldo O'Kon
95929      | Miss Maegan Adams
95929      | Magnolia Jones Sr.
110231     | Charley Casper

How can I remove duplicates only where the two columns match? For example, 95929 | Magnolia Jones Sr. appears three times. I want to remove duplicates of the entire row, based on both columns together rather than a single column.
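A sketch with dedup over both fields: dedup keeps the first result for each distinct combination of the listed fields, so rows are removed only when both columns match (the base search is a placeholder):

<your search>
| dedup Paranumber Name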
Splunk_TA_stream and a UF are installed on a server with MySQL 8. Splunk App for Stream can recognize traffic with the protocol stack ip:tcp:mysql; however, it can't recognize traffic with the protocol stack ip:tcp:ssl:mysql. Is there any workaround to see the SELECT query in captured traffic with protocol stack ip:tcp:ssl:mysql?
Hi Community,

I have a use case where the client needs data to be stored over an extended period of time. The main objective is to test whether I can use a combination of SSD and HDD for this use case, and to make sure that when the data sits on the HDD over a Network File System, the dashboards powered by data models still work. Since the client wants data to be available for at least 6 months, an index was created with hot/warm buckets on SSD and cold buckets on slower storage connected over the Network File System. For this test, I created a new index with a bucket size of 500 MB and 2 hot buckets (and no warm buckets); I made no other changes to the default configuration. While the data is in the hot/warm buckets, the data models work and the panels load. But when the data is moved to the HDD, the panels don't work and I get an error, as attached above. When I run a search query, I can still see the data. Could you please let me know how I can fix this issue?

Regards, Pravin
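For reference, a sketch of the kind of indexes.conf stanza the test describes (the index name and paths are placeholders; maxDataSize is in MB):

[long_retention_idx]
# Hot/warm buckets on local SSD
homePath = $SPLUNK_DB/long_retention_idx/db
# Cold buckets on the NFS-mounted HDD volume
coldPath = /mnt/nfs/splunk/long_retention_idx/colddb
thawedPath = $SPLUNK_DB/long_retention_idx/thaweddb
maxDataSize = 500
maxHotBuckets = 2
# Roughly 6 months of retention
frozenTimePeriodInSecs = 15552000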