All Topics


Hello Splunkers, I have a field called state_sinfo which has values like (up, up*, up$, up^, continue, continue$, continued, continied$, down, down%, down#, drop, drop*, drop$). I want to categorize certain values of state_sinfo as below:

available: (up, up*, up$, up^, continue, continue$, continued, continied$)
not_available: (down, down%, down#)
down: (drop, drop*, drop$)

Then I want to calculate the sum of all categories by time. Lastly, I want to calculate the percentage:

| eval "% available" = round( available / ( available + drop ) * 100 , 2)
| eval "% drained" = round( drop / ( available + drop ) * 100 , 2)

Sample event:

slu_ne_state{instance="192.1x.x.x.",job="exporters",node="xyz",partition="gryr",state_sinfo="down",state_sinfo_simple="maint"} 1.000000 1676402381347

Thanks in advance!
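A minimal sketch of one way to do this, assuming state_sinfo is already extracted at search time; the index and sourcetype names are placeholders, and the third bucket is named drop here so it lines up with the variable names in the percentage evals:

index=my_index sourcetype=sinfo_metrics
| eval category=case(
    match(state_sinfo, "^(up|continue)"), "available",
    match(state_sinfo, "^down"), "not_available",
    match(state_sinfo, "^drop"), "drop",
    true(), "other")
| timechart span=1h count(eval(category=="available")) AS available count(eval(category=="drop")) AS drop
| eval "% available" = round( available / ( available + drop ) * 100 , 2)
| eval "% drained" = round( drop / ( available + drop ) * 100 , 2)

The case/match pairs bucket the raw values by prefix (so up* and up$ both land in available), timechart sums each bucket per hour, and the final evals compute the percentages from the question.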
We upgraded our Splunk Enterprise from v8.2.5 to v9.0.1. When we did, it broke the Add-on for Microsoft 365. Every time a connection is made to Microsoft we see this SSL error:

SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

Has anyone run into this before?
Hello, I'm a new Splunk Compliance Manager and I need some assistance. How do I check Splunk compliance, and how do I better manage licensing? Thanks, Rodney
By Fabrizio Ferri Benedetti, Robin Pille, and Christopher Gales

As part of our long-term documentation strategy, now anyone, from Splunkers to Splunk users, can improve the Splunk Observability Cloud documentation by adding examples, documenting new settings, or fixing issues. By opening the Observability docs for community contribution, we’ve made it easier to ensure we’re providing the highest quality, most up-to-date content for you to use every day. All you have to do is select the Edit this page link on any page of the Observability Cloud documentation.

The Splunk documentation team believes that the product is docs, and that great docs are key to the success of great software and the success of its users. Together with the existing Feedback feature, the Edit this page link is the simplest and most direct way to ensure that we’re providing the highest quality, most up-to-date product documentation.

The customers, partners, and Splunkers who comprise our community (that’s you!) are passionate about sharing their knowledge and experience to help everyone successfully implement and use Splunk software. You have proven that you’re invested in our documentation and want to work with us to make it great. That's why we're giving you a new way to contribute directly.

Why we’re opening the docs for community contributions

Open source contributions and docs-as-code principles are at the core of how the Observability docs team works. Prior to now, the docs were in an internal repository that let any Splunk employee review and edit the docs. Making the jump to publicly available source is a natural progression in our journey to empower users, and the first step in making more Splunk documentation content available for community contribution.

Opening the docs to community contributions also improves our efficiency and responsiveness. Every month we get dozens of feedback tickets from Splunkers and customers. Many of them are quick corrections that our feedback submitters could solve immediately if they had direct access to the source files. By giving you, the user, the power to edit the docs, we’re increasing the speed of improving the docs and creating a more effective feedback loop.

All things small and beautiful...and editable

What are the types of things you can contribute to the Splunk Observability Cloud documentation? There are many! Docs require constant care, and we always strive to improve the relevance and usefulness of our content. In only a few minutes of your time, you can:

- Add new code snippets and examples for our instrumentation and agents.
- Update broken links that point to Splunk documentation or external sites.
- Correct inaccurate information, like a wrong default value or command.
- Update outdated screenshots and animations to reflect the current UI.
- Fix formatting glitches, wrong capitalization, duplicated words, and more.

You can also submit deeper changes, such as conceptual explanations, diagrams, and even entire new topics that can benefit all users. If you’re entering the technical writing profession, adding docs to our repository is a fantastic way of adding new samples to your documentation portfolio. There’s a free trial of Splunk Observability Cloud you can use to get started.

If you’re unsure about a change or have a different question about the docs, you can still always use the Feedback button at the bottom of every page to let us know of a documentation issue, and we’ll do our best to respond as soon as possible, just as we’ve always done.
Now, though, you have an alternative: you can submit the changes yourself.

How to get started with docs contributions

On every page of the Observability Cloud docs site you can find an Edit this page link. Select the link to load the source of the document in a GitHub preview. The only prerequisite to edit the docs is having a free GitHub user account. After you’re done with your edits, GitHub prompts you to open a pull request and fill out the description of the changes using a template. Within 72 hours, we’ll review your pull request and publish it on the site if everything looks good. Should we reject it, we’ll tell you the reason and continue the discussion.

You can learn more about how to build and test the docs locally, as well as our review criteria, in the CONTRIBUTING.md file of our repository. The file contains basic reStructuredText guidance and links to the Splunk Style Guide, which provides the guidance we use to create consistent, high-quality docs.

Are you ready to help us bring our docs to the next level?

$ git add --all
$ git commit -m "Let's write the o11y docs"
$ git push
Hi, I know that you can simply add an HTML part above a chart to replace the actual panel title, and that the class used in the HTML part is easily centered. However, it is not as wide as the chart below it. You can also use CSS to change the style of the panel title itself, but for some reason it has a weird offset and is not truly centered. The same is true for chart titles... Back in a project I found a way to use flexbox to truly center the native .panel-title style, but for the life of me, I cannot put it back together. The problem is that the flexbox itself is already not centered in the panel header. Any ideas are greatly appreciated.

Kind regards, Mike
I have a lookup which I want to compare search results against to find duplicate values. How do I ignore duplicates that already exist in my dataset, and only identify duplicates by comparing results from my search against the dataset?

index=myindex sourcetype=k_logs (ns4:phoneNo OR emailInfo OR address) AND DummyOrgName AND "<requestType>UPDATEUSER</requestType>"
| xmlkv
| rename "ns4:phoneNo" as phoneNo
| search orgName = "DummyOrgName"
| eval phoneAndEmail= coalesce(phoneNo, address)
| fields phoneAndEmail phoneNo address ipAddress userName
| table phoneAndEmail phoneNo address ipAddress userName
| append [| inputlookup thisLookup.csv | table phoneAndEmail phoneNo address ipAddress userName]
| stats values(phoneNo) AS phoneNo values(address) AS address values(ipAddress) AS ipAddress values(userName) AS userName dc(userName) AS UserCount by phoneAndEmail
| where UserCount>2
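One way to sketch this, assuming the goal is to flag only groups that mix at least one fresh search result with the existing lookup rows: tag each row with its origin before the stats, then require both origins in a group. The src field name is an arbitrary choice for the example:

... (base search as above, through the phoneAndEmail eval) ...
| eval src="search"
| append [| inputlookup thisLookup.csv | eval src="lookup" | table phoneAndEmail phoneNo address ipAddress userName src]
| stats values(phoneNo) AS phoneNo values(address) AS address values(ipAddress) AS ipAddress values(userName) AS userName dc(userName) AS UserCount values(src) AS src by phoneAndEmail
| where UserCount>2 AND mvcount(src)=2

Because values(src) deduplicates, mvcount(src)=2 keeps only phoneAndEmail groups that appear in both the new results and the existing dataset, dropping duplicates that live entirely inside the lookup.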
I need to provide audit details on our ES Content Library. Using REST, I can identify searches that have been updated and when they were updated, but the REST call only reports the owner of the search, not the person who made the change.

| rest splunk_server=local /servicesNS/-/-/saved/searches
| fields title search eai:acl.owner eai:acl.app alert_type updated cron_schedule auto_summarize.suspend_period dispatch.earliest_time dispatch.latest_time id
| convert timeformat="%Y-%m-%dT%H:%M:%S+00:00" mktime(updated)
| where updated >= relative_time(now(), "-4h")

Looking at conf.log I can see when a search was written:

index="_internal" source="/opt/splunk/var/log/splunk/conf.log" earliest=-30h WRITE_STANZA
| stats values(data.optype_desc) values(data.payload.children.action.correlationsearch.label.value) values(data.payload.children.search.value)

Neither of these searches tells me who was the individual writing the search. Any other ideas as to how I can accomplish this? Thank you.
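One more angle, sketched on the assumption that you can search _internal: splunkd's access log records the authenticated user behind each REST POST, so edits to saved searches show up there with the editing user attached. Verify the field names in your environment:

index=_internal sourcetype=splunkd_access method=POST uri=*/saved/searches/*
| table _time user uri status

Cross-referencing the uri (which ends in the saved search name) with your REST results should identify who changed what, and when.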
I have a dataset which has a column "Port" that contains (limited) numerical values. I want to make these values display as text (e.g. 443 == HTTPS). I could do this in Excel, but I'm a Splunk newbie and frankly in need of a nudge in the right direction... I assume it would be some kind of lookup? I would then pull the text values into a pivot for a dashboard to replace my current one with the port numbers. Kudos and virtual shiny things for anyone who can help.
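A lookup is exactly the usual tool for this. A sketch, where ports.csv and the port_names lookup definition are placeholder names you would create yourself under Settings > Lookups. First a small CSV:

port,service
22,SSH
80,HTTP
443,HTTPS

Then, in your search:

| lookup port_names port AS Port OUTPUT service AS PortName
| eval PortName=coalesce(PortName, tostring(Port))

The coalesce keeps the original number visible for any port the CSV doesn't cover, and the resulting PortName field can feed your pivot in place of the raw numbers.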
Probably a silly question, but for the life of me I can't find documentation: when we create a code block, how do we get the name of said code block? Currently I want to add extra information in my error handling and include the name of the code block that the error occurs under. I tried variations of phantom.name, custom_function__name, custom_function.name/(), self.name, etc. Any help is appreciated!
Hi, I have a lookup definition that looks like this (screenshot not included). When I run this search against the lookup definition, I get the wider subnet back:

index="FW" action=allowed src_ip=10.0.0.1 sourcetype=fw
| lookup ipam subnet AS src_ip OUTPUT subnet AS "Source Subnet"
| table src_ip "Source Subnet" dest_ip Service Protocol app Rule Device _time
| sort 0 -_time

The ipam lookup contains a number of subnets that contain each other (for example 10.0.0.0/16 and 10.0.0.0/24). The result I get is the wider subnet, in my example 10.0.0.0/16. Is there a way to choose the smaller subnet that contains the src_ip? Thanks
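A sketch of one approach, assuming the lookup definition uses CIDR match_type on subnet and its max_matches setting allows more than one match, so that all containing subnets come back as a multivalue field: expand the matches, rank them by prefix length, and keep the longest (most specific) one per event. The dedup keys are placeholders to adapt to whatever uniquely identifies your rows:

index="FW" action=allowed src_ip=10.0.0.1 sourcetype=fw
| lookup ipam subnet AS src_ip OUTPUT subnet AS matched_subnet
| mvexpand matched_subnet
| eval prefix_len=tonumber(mvindex(split(matched_subnet, "/"), 1))
| sort 0 -prefix_len
| dedup _time src_ip dest_ip
| rename matched_subnet AS "Source Subnet"
| table src_ip "Source Subnet" dest_ip Service Protocol app Rule Device _time
| sort 0 -_time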
I want to write a rex to extract values in a field that are delimited by commas.

index=group sourcetype="ext:user_accounts"
| rex field=Ldap_group "[,\s]+(?<Ldap_group>[^,]+)"
| stats values(Ldap_group) AS Ldap_group by elid, full_name

The regex I wrote only gave me a few of the values, not all of them. I want every value in Ldap_group to be written separately in a different row. Requesting assistance.

Field name: Ldap_group
Values:

MSV_EM_IMPKliAy_Standard_App,MSV_EM_IM_Federated,MSV_AAD_WkfKBarrier_Enabled,ADTestVrpVen5_23,V-IDaaS_ServiAeNKw_VKKd_Users,DTAA_ADT_AZAD_LIA_SKU_KffiAe365_Teams,MSV_EM_IM_PKKl02_Users,DTAA_EAK_HiplWkkSuppKrt_QA SPLVRP001-16,PRV_EAK_AS_SRV_HiplWkkSuppKrt_QA,ADTestVrpVen5_23 Wave_WkterAede MyID DesktKp DSK,AppliAatiKnSuppKrtEnVWkeer,KPS-VanBeurionEESSP-KF-3,VAAT-WARP ManaVementMKdule,DTAA_JPT_ITMP_SN_SVAKPS_IP_MAJKR_WkA_MANAVER,AharlKtteDiversityTeam-3,V-KPS-TEAHNKLKVY TMS SP-AN-4,DEM_WalkMe WalkMe ExtensiKn,EES1225AIBBldV31,DTAV_EAK_EAAK(),DTAA_APD_ATK_EANF_PRKD_users,DTAA_VP_EUA_HAPA_FR_PermDisable,ENT-TeAhnKlKVy-All-4,Wave_SimKn Tatham Putty x86,ETIFTE-1,DTAA_EIT_AAV_IdaaS_JPTLearner,DTAA_AFV_ITE_TEAH_PIBI_Users,APP_HitaAhi Vantara HAP Anywhere 4.5.0.4,Tera-Partners-24,APP_KraAle Java JDK 8uXXX -X86-,MSV_EM_IM_Federated,DTAA_EIT_EAAA_EAS_IDaaS_lKVWk,SP-PermissiKns-TimSlKanKrV-32,DTAA_EIT_TRIAV_Users,DP-TeAhnKlKVyVanBeurion-4,DTAA_NSK_IAS-SNVA_Default,MSV_EM_IM_VrKupAhat,AXAlients-32,V-SP_TEAH_FTE-3,APP_M365 KffiAe - MKnthly Enterprise Ahannel,V-AIA TEAM MEMBERS-2,DTAA_KRA_PRPX_BusKwnerEdit,DTAA_EIT_WWARP_View_AAAess,DTAA_AAK_ARS2S_JP_AKntraAt_ExAeptiKn,APP_SimKn Tatham PuTTY -X86-,V-KTV-TeAhnKlKVy-4,V-TIS-EPS-All-1,DTAA_AKK_TRIMS_VrKup_RelatiKnship_ManaVer_JPT,iPhKneUsers_VKKd_BYKD,APP_REALTEK USB VBE DRIVER 23.50.0211.2022,DTAA_AKK_TRIMS_VrKup_EnVaVement_ManaVer_JPT,DTAA_EIT_PMT_BPAN_IDaaS_ReadKnly,DKE-SP-TeAh_FTE-3,DTAA_AFV_EAPT_User,DTAS_NSK_IAS_ValidatiKnKnly1,NKtReVulatedUsersTKJKurnal-40,DTAA_AFV_EAPT_User_AKnfiiontial,MSVWk_AD_DEV_iKS_BYKD,DTAA_AFV_1WkV_AAAESS,APP_SynaptiAs DisplayLWkk VraphiAs 23.3.6400.0,ExAhanVeTeAhALT_AIA_FTE,DTAA_EIT_PSVHT_IdaaS_JPTLearner,Wave_MiArKsKft .NETDesktKpRuntime x64,PMSV-eaAKAKn-SendAs,DTAA_AFV_ADMF_TMDL_Kwner,DTAA_JPT_ITMP_SN_ITIL_USER_TEAHNKLKVY,DTAA_ADT_AZAD_LIA_SKU_KffiAe365_Teams,DTAA_TIV_Tab_ADMF_MAD_Wkt,1AAAAllUsers-31,MSV_AAD_WkfKBarrier_Enabled,DTAA_TIV_Tab_EAAK_EPPIA_Wkt,APP_WaaS_JP_Wksiders,Wave_ZKKm VideK AKmmuniAatiKns,AharlKtteDireAtKry-5,WAV_PRD_NP_1_TM_KX_Primary,MSVWk_ADT_AZAD_LIA_SKU_KffiAe365_Wktune,SredMyAppsMKbile,DTAA_TIV_Tab_ADMF_TMDL_Wkt,DTAA_AHS_PKrtal_IE_HKME,MSVTP_AallWkV_Private,PMSV-EAAKMessaVWkV-SendAs,DTAA_NSK_Wkternal_SKAial_AllKwed_Users,WkteVratedMarketWkVAllExAeptTellers-30,DTAA_AFV_MIM_TMIM_BUSWkESS_UNIT_KPERATKR,MSVTP_MessaVWkV_AhatKn,V-ETI TEAHNKLKVY FTE-3,Wave_VKKVle AhrKme,DTAA_VP_EUA_HAPA_FR_RemKval,DTAA_EIT_TRIAV_RepKrts,PMSV-eaAKAKn,SP_ALM_Read_AAAess_FWk_TeAh_DL-4,PriKrity_RemKte_AAAess_EAAK_Tier1,EITAll-4,APP_ZKKm ZKKm,MSVTP_MeetWkV_App_Aud_Vid_ExtAKnf,saEionTAKnneAt,Wave_AisAK Jabber,Wave_WkterAede MyID WWkdKws WkteVratiKn ServiAe,DTAA_ENT_HAPA_PKD0033,IMAKrpKrateAll-29,JP-TeAhnKlKVy-All-FTE-3,MSV_EM_IM_PKKl05_Users,V-SIFFERMAN FTE,EES1225AIBBldV3,MSV_EM_IMPKliAy_Standard_App_Aud_Vid_ReA_DialWk_ExtAKnf,DTAA_APD_ATK_EJRA_BSD_PRKD_JSW_users,LeVal_TeAhnKlKVy-4,Wave_KraAle Java JDK 8U x86,DEM_MiArKsKft EdVe WebView2 Runtime,DTAA_TKV_Pixel_Users,MSVTP_MeetWkV_App_Aud_Vid,VADI-RKKtTeamsPrKxyExAeptiKn,PilKt_MKbile_Users_Teams,DTAA_TIV_Tab_ADMF_DMI_BMD_Wkt,DTAA_WkD_1DIM_USER,SP-TS-All-32,AMTRADS-AllSaul-4,PMSV-EAAKMessaVWkV,Wave_WkterAede MyID Self-ServiAe App,RMSShare-45,DTAA_AFV_EAPT_VlKbalRead,APP_WktradK 911 LKAatiKn ManaVer 1.7.1,DTAA_TIV_Tab_EDQ_WkT
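Rather than rex, makemv splits on the literal delimiter and catches every value; mvexpand then turns each value into its own row. A minimal sketch of that swap:

index=group sourcetype="ext:user_accounts"
| makemv delim="," Ldap_group
| mvexpand Ldap_group
| table elid, full_name, Ldap_group

If you prefer the rex route, the equivalent fix is max_match=0 so the regex keeps matching past the first hit, e.g. | rex field=Ldap_group max_match=0 "(?<group_name>[^,]+)" (group_name is an arbitrary new field name for the extracted values).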
Two things!

1) I've created a Data Collector which collects data from "Method Parameter @index: 1" from the below. I'm getting an array with the data collected, and as I understand it I need to configure a Getter Chain to get single items as results - how do I do this!?

Class: OpenAPI.Class01.Class02
Method:
final native public IResult`1 CreateCard(
  Int32 customerId,
  Int32 abc,
  UInt32 from,
  UInt32 to,
  UInt32[] zones,
  System.DateTime fromDate,
  System.DateTime toDate,
  System.String transactionId
)

2) Is it possible to create a Data Collector which collects both the result and the parameter(s)?
Hello again, my apologies for all of these questions. I have a lookup table called login_sessions.csv which will keep track of allowed login sessions. It has the following columns: UID, sessionstart, and sessionend. I would like to add and remove entries in the lookup table depending on the value of a field called "action" in the events. If the value of action is "login" then I would like to add the userID, session_start, and session_end fields from the event into the login_sessions.csv lookup, and if the value is "logoff" then I would like to remove the existing entry from the lookup. I was hoping I could use something like an if or case statement to do this, but I have only seen them used with eval and I haven't had much luck so far. E.g.:

if(action=="login", (inputlookup append=true login_sessions.csv | eval UID=userID, sessionstart=session_start, sessionend=session_end | outputlookup login_sessions.csv))

Is there a way to do this in a search? Thank you for any assistance.
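if/case can't branch between commands, but you can get the same effect by rebuilding the lookup from the union of current events and existing rows, keeping only UIDs whose most recent action is a login. A sketch, where the index and sourcetype are placeholders; the _time=0 on the lookup rows makes any real event win the latest() comparison:

index=main sourcetype=auth_logs (action="login" OR action="logoff")
| eval UID=userID, sessionstart=session_start, sessionend=session_end
| append [| inputlookup login_sessions.csv | eval action="login", _time=0]
| stats latest(action) AS last_action latest(sessionstart) AS sessionstart latest(sessionend) AS sessionend by UID
| where last_action=="login"
| fields UID sessionstart sessionend
| outputlookup login_sessions.csv

Run on a schedule, this adds a row for each new login and drops any UID whose latest event is a logoff.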
I have a few files in which the log events happen to not be in chronological order. Specifically, an event with, say, timestamp "2022-01-01 11:00:00" may occur towards the top of the log, while a different event (with a different event message) with the same timestamp may occur towards the bottom of the log. It is totally acceptable to have log events where the timestamps are exactly equal. What Splunk is doing, however, is merging all of these "distributed" events together into one single event. This should not happen. These are my config files:

props.conf

[mySourceType]
# example: 2022-07-01T23:53:54
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
REPORT-default = sourcefields-default

transforms.conf

[sourcefields-default]
SOURCE_KEY = source
REGEX = /files/(.*?)/(.*?)/(.*?)/(.*?)\-(.*)
FORMAT = field1::$1 field2::$2 field3::$3 field4::$4 field5::$5
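The merging is usually Splunk's default line-merging logic grouping consecutive lines together. A sketch of the common fix, assuming every event starts on its own line with that timestamp format; the LINE_BREAKER regex here is an assumption to adjust to your data:

props.conf

[mySourceType]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
REPORT-default = sourcefields-default

SHOULD_LINEMERGE = false with an explicit LINE_BREAKER makes each timestamped line its own event regardless of ordering or duplicate timestamps; it has to be in place on the component that parses the data (indexer or heavy forwarder) before re-ingesting the files.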
Hi, I have a search which has S_Host Name values of different DB instances, say MSSQL and Oracle, in a single field.

e.g. S_Host Name has values such as:
11xx
22xx
11yy
22yy

And I have separate lookups for MSSQL and Oracle, i.e. lookup1 and lookup2.

lookup1 contains:

hostname | supportgroup | serviceoffering
11xx | random support group1 | random service offering1
22xx | random support group2 | random service offering2

lookup2 contains:

hostname | serviceoffering | supportgroup
11yy | random service offering1 | random support group1
22yy | random service offering2 | random support group2

My base search is:

index=a sourcetype="a" "field_name"="random_value"
| dedup "IP"
| stats values("S_Host Name") as "S_Host Name" by "IP"

Now I have to join like this:

index=a sourcetype="a" "field_name"="random_value"
| dedup "IP"
| stats values("S_Host Name") as "S_Host Name" by "IP"
| join type=left "S_Host Name" [| inputlookup lookup1 | fields hostname serviceoffering supportgroup | rename hostname as "S_Host Name"] [| inputlookup lookup2 | fields hostname serviceoffering supportgroup | rename hostname as "S_Host Name"]

But the above search is not working... Can someone help me with this?
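A sketch that avoids join entirely: call lookup twice and let OUTPUTNEW on the second call fill in only the hosts the first lookup didn't match. This assumes lookup definitions named lookup1 and lookup2 exist over those files:

index=a sourcetype="a" "field_name"="random_value"
| dedup "IP"
| stats values("S_Host Name") as "S_Host Name" by "IP"
| lookup lookup1 hostname AS "S_Host Name" OUTPUTNEW supportgroup serviceoffering
| lookup lookup2 hostname AS "S_Host Name" OUTPUTNEW supportgroup serviceoffering

MSSQL hosts get their fields from lookup1, and since their supportgroup/serviceoffering are then already populated, the second lookup only fills in the Oracle hosts.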
I am looking at event data. I can group the data by hour like this:

index=wineventlog EventCode=4740 Caller_Computer_Name=SERVER14 Account_Locked_Out_Name=USER12
| timechart span=1h count BY Caller_Computer_Name

but that gives me an hour for each day, so hundreds of rows. I want 24 rows, i.e. I want all events that occur between midnight and 1am, on any day, in the first row; then all events between 1am and 2am, on any day, in the second row; and so on.
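One commonly used way to sketch this: derive an hour-of-day field from _time and group on that instead of timechart:

index=wineventlog EventCode=4740 Caller_Computer_Name=SERVER14 Account_Locked_Out_Name=USER12
| eval hour_of_day=strftime(_time, "%H")
| stats count by hour_of_day
| sort 0 hour_of_day

strftime(_time, "%H") yields "00" through "23" regardless of the date, so the stats collapses every day into at most 24 rows.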
Hi, I would greatly appreciate your help. I would like to know if there is any way I could filter a value based on another column: I need to filter out of column1 anything that column2 contains.

Sample:

column1 | column2
apple | orange
grapes | grapes

Expected output (the grapes should be removed from column1):

column1 | column2
apple | orange
 | grapes

Will I use where or mvfilter? | where column!=column1

Thank you in advance
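If column1 and column2 are multivalue fields on the same result row, mvfilter alone can't do it, since mvfilter may only reference the one field it is filtering. A sketch using mvmap (Splunk 8.0+) instead, where the inner mvfind checks each column1 value against all of column2; note this treats the values as a regex, so special characters in the data would need escaping:

| eval column1=mvmap(column1, if(isnull(mvfind(column2, "^".column1."$")), column1, null()))

Each column1 value survives only when mvfind finds no exact match for it anywhere in column2, which drops "grapes" while leaving column2 untouched.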
Hi, I have logs separated by tabs. I have defined FIELD_DELIMITER=tab, INDEXED_EXTRACTIONS=tsv, FIELD_NAMES, etc. in props.conf accordingly. I now need to extract more fields from one of the fields using regex. What is the most sensible and efficient way to do this? Is it possible to do it in props.conf at the same time the TSV splitting happens? Or is the only possibility to use "rex field=" at search time? BR Max
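You don't have to fall back to rex in every search: props.conf on the search head can run a regex against an already-extracted field via the "in <fieldname>" suffix of an EXTRACT- class. A sketch, where the payload field name and the regex are placeholders for your actual TSV column and pattern:

[yourSourceType]
EXTRACT-more_fields = (?<error_code>ERR\d+) in payload

The INDEXED_EXTRACTIONS TSV split happens at index time on the forwarder, while this EXTRACT runs at search time on top of it, so the two combine without re-indexing anything.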
I have a scheduled savedsearch that may return a result such as this:

_time, host, _raw
2023-01-01, host A, <some message>
2023-01-02, host A, <some message>
2023-01-03, host A, <some message>

In this example, the content of <some message> causes an alert to fire, which is what I expect. Now, assume that new events occur and the next scheduled search returns this (the last two rows are new):

2023-01-01, host A, <some message>
2023-01-02, host A, <some message>
2023-01-03, host A, <some message>
2023-01-04, host A, <some message>
2023-01-05, host A, <some message>

Problem: the next scheduled search returns the entire list (5 events) and thus triggers an alert containing these 5 events. However, 3 of these events were contained in a previous alert and are thus superfluous.

Desired outcome: the new alert should only be triggered based on the two new events (the last two rows).

What I have tried:

- Set trigger type to "for each event" and suppress for fields _time and host, because I would assume that the combination of _time and host will uniquely identify the event to suppress.
- I also tried to learn about dynamic input lookups, but the documentation seems to be lost / unavailable (http://wiki.splunk.com/Dynamically_Editing_Lookup_Tables)
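One pattern that still works for this: keep a lookup of already-alerted event keys and filter against it in the alert search. A sketch, assuming a (hypothetical) lookup file alerted_events.csv with a single event_key column; the base search is your existing one:

<your base search>
| eval event_key=_time . "|" . host
| lookup alerted_events.csv event_key OUTPUT event_key AS seen
| where isnull(seen)

With a trailing | fields event_key | outputlookup append=true alerted_events.csv step (or a second scheduled search that records the keys), each _time+host combination can only ever trigger once, so subsequent runs alert on genuinely new events only.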
We have been trying to ingest AWS EventBridge events into Splunk Cloud using the API destination partners provided by AWS, but when we try to ingest the data using the URL https://SPLUNK_HEC_ENDPOINT:optional_port/services/collector/raw, the data lands in index="main". We need to ingest the data into a different index. Can someone help with how this can be done?
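Two options worth sketching, with your_index as a placeholder: set the default index on the HEC token itself (Settings > Data Inputs > HTTP Event Collector > edit the token), or pass the index as a query-string parameter on the raw endpoint:

https://SPLUNK_HEC_ENDPOINT:optional_port/services/collector/raw?index=your_index

Either way, the target index has to be in the token's list of allowed indexes, or the events will be rejected.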