We have two indexers, one on version 8.1.5 (which will not be updated soon) and one on version 9.1.0.1. I see version 9 has a nice feature, "Ingest Actions", which is exactly what I need to mask some incoming Personal Information (PI). It arrives as JSON and looks something like: \"addressLine1\":\"1234 Main Street\", (yes, I believe there are literal backslashes in there). I tested a regex on 9 and added it to the transforms.conf and props.conf files on our 8.1.5 indexer, but the rules didn't work. In one of my tests the rule caused an entire log entry to change to "999999999"; not quite what I was expecting, but at least we know Splunk was applying the rule. This is one of my rules that had no effect:

[address_masking]
REGEX = (?<=\"addressLine1\":\")[^\"]*
FORMAT = \"addressLine1\":\"100 Unknown Rd.\"
DEST_KEY = _raw

Found the docs and am looking at them now: Configure advanced extractions with field transforms - Splunk Documentation. Can someone point out what is wrong with the above transform? Thanks!
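A likely culprit: with DEST_KEY = _raw, Splunk rewrites the event using FORMAT, so a FORMAT that does not carry the rest of the event through capture groups can clobber it (which would explain the "999999999" result). Below is a minimal sketch of a masking transform that captures and re-emits the surrounding text. The sourcetype name my_json is an assumption, as is the exact escaping; verify against how the backslashes really appear in _raw.

```
# props.conf (hypothetical sourcetype; applied at index time on the indexer/HF)
[my_json]
TRANSFORMS-mask_address = address_masking

# transforms.conf -- capture everything around the value and rebuild _raw
[address_masking]
REGEX = (.*\\"addressLine1\\":\\")[^"\\]*(\\".*)
FORMAT = $1100 Unknown Rd.$2
DEST_KEY = _raw
```

An alternative sketch is SEDCMD in props.conf, which avoids transforms entirely: SEDCMD-mask_address = s/(\\"addressLine1\\":\\")[^"\\]*/\1100 Unknown Rd./g. Both are untested against your exact data.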
I have a lookup table I am using to pull in contact information based on the correlation of a couple of fields. The way the lookup table is formatted makes my results look different than what I want to see. If I can consolidate the lookup table it will fix my issue, but I can't figure out how to do it. The table currently looks like this:

Org    Branch    Role       Name
Org A  Branch 1  President  Jack
Org A  Branch 1  VP         Jill
Org A  Branch 1  Manager    Mary
Org A  Branch 2  President  Hansel
Org A  Branch 2  VP         Gretel
Org A  Branch 3  VP         Mickey
Org A  Branch 3  Manager    Minnie

I use Org and Branch as matching criteria and want to pull out the names for each role. I do not want to see multivalue fields when I am done. The current search looks like:

[base search] | lookup orgchart Org Branch OUTPUTNEW Role | mvexpand Role | lookup orgchart Org Branch Role OUTPUTNEW Name

This works, but the mvexpand (obviously) creates a new line for each role, and I do not want multiple lines for each in my final results. I want a single line for every Org/Branch pair showing all the Roles and Names. I am thinking the way to solve this is to reformat the lookup table to look like the table below, then modify my lookup. Is there a way to "transpose" just the two fields?

[base search] | lookup orgchart Org Branch OUTPUTNEW President, VP, Manager

Org    Branch    President  VP      Manager
Org A  Branch 1  Jack       Jill    Mary
Org A  Branch 2  Hansel     Gretel
Org A  Branch 3             Mickey  Minnie

Thank you!
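One hedged way to build the wide table, assuming the existing lookup is named orgchart and writing the result to a new hypothetical lookup orgchart_wide: combine Org and Branch into a temporary key, pivot with xyseries, then split the key back out:

```
| inputlookup orgchart
| eval pair = Org . "|" . Branch
| xyseries pair Role Name
| eval Org = mvindex(split(pair, "|"), 0), Branch = mvindex(split(pair, "|"), 1)
| fields Org Branch President VP Manager
| outputlookup orgchart_wide
```

After that, [base search] | lookup orgchart_wide Org Branch OUTPUTNEW President VP Manager should return one row per Org/Branch pair. This is a sketch; verify the column names xyseries produces match your Role values exactly.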
Hi Splunkers, I would like to export logs (raw/CSV) out of Splunk Cloud periodically and send them to GCP Pub/Sub. How can I achieve this? I would appreciate any ideas.
Hello, I am using a search-time extracted field called "src_ip". To optimize search response times, I have created an indexed field called "src_ip-index". How can I configure Splunk on the back end so that end users query only a single field which draws on both "src_ip-index" and "src_ip", but prefers "src_ip-index" when it is available, due to its better performance? I hope that is clear enough. Best regards,
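One hedged approach, assuming a hypothetical sourcetype my_sourcetype and a new user-facing field name src_ip_unified: define a search-time calculated field that prefers the indexed field when present. Note the single quotes, which eval requires around a field name containing a hyphen:

```
# props.conf on the search head (sketch; stanza and field names are assumptions)
[my_sourcetype]
EVAL-src_ip_unified = coalesce('src_ip-index', 'src_ip')
```

Users would then query src_ip_unified only. One caveat: the calculated field is itself evaluated at search time, so this mainly simplifies the user experience; the indexed field src_ip-index remains the one to use directly with tstats for the real performance win.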
Hello, I want to extract the field "issrDsclsrReqId" using the rex command. Can someone please help me with the command to extract its value, which in the sample below is eiifr000005229220231229162227?

{
  "shrhldrsIdDsclsrRspn": {
    "dsclsrRspnId": "0000537ede1c5e1084490000aa7eefab",
    "issrDsclsrReqRef": {
      "issrDsclsrReqId": "eiifr000005229220231229162227",
      "finInstrmId": { "isin": "FR0000052292" },
      "shrhldrsDsclsrRcrdDt": { "dt": { "dt": "2023-12-29" } }
    },
    "pgntn": { "lastPgInd": true, "pgNb": "1" },
    "rspndgIntrmy": {
      "ctctPrsn": {
        "emailAdr": "ipb.asset.servicing@bnpparibas.com",
        "nm": "IPB ASSET SERVICING"
      },
      "id": { "anyBIC": "BNPAGB22PBG" },
      "nmAndAdr": {
        "adr": {
          "adrTp": 0,
          "bldgNb": "10",
          "ctry": "GB",
          "ctrySubDvsn": "LONDON",
          "pstCd": "NW16AA",
          "strtNm": "HAREWOOD AVENUE",
          "twnNm": "LONDON"
        },
        "nm": "BNP PARIBAS PRIME BROKERAGE"
      }
    }
  }
}
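Two hedged sketches for pulling that value out, assuming the JSON above is the whole _raw event. With rex:

```
| rex "\"issrDsclsrReqId\":\s*\"(?<issrDsclsrReqId>[^\"]+)\""
```

Or, since the event is valid JSON, spath may be cleaner:

```
| spath path=shrhldrsIdDsclsrRspn.issrDsclsrReqRef.issrDsclsrReqId output=issrDsclsrReqId
```

Both are untested against your exact events; adjust if the field can appear more than once per event.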
Hi Splunkers, I need to recover the Splunk version for every component in a particular environment. I do not have access to the GUI and/or .conf files on all machines, so the idea is to recover that information with a Splunk search. Here: How-to-identify-a-list-of-forwarders-sending-data I found a very useful search that returned a lot of information about all the forwarders: UF, HF, and so on. Since I am on an on-prem environment rather than a cloud one, I also have to recover the Splunk version used on the indexers and search heads. So my question is: how should I change the search from the link above to get the version from the IDXs and SHs?
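A hedged starting point: if your search head has the indexers and other search heads configured as distributed-search peers, the REST endpoint server/info reports each peer's version directly:

```
| rest /services/server/info
| table splunk_server version serverRoles os_name
```

This only covers instances the search head can reach as search peers; forwarder versions would still come from the forwarder search in the linked post.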
I am wondering why the two following searches, when applied to exactly the same time range, return different values: index=<my_index> logid=0000000013 | stats count index=<my_index> logid=13 | stats count The first one returns many more results than the second. (The type indicated by Splunk for this field is "number", not "string".)
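A hedged way to check whether zero-padding is the issue is to compare the values numerically after extraction:

```
index=<my_index>
| eval logid_num = tonumber(logid)
| where logid_num = 13
| stats count
```

If this matches the first count, the stored values are zero-padded strings like "0000000013", and the literal filter logid=13 only matches events whose raw text is exactly 13.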
I have been struggling to create a dynamic dropdown in Splunk Dashboard Studio. I have watched several videos, but I think they mostly cover Classic Dashboards. I have also read the documentation, but it has been of no help. My sample problem is:

A: B, C, D
W: X, Y, Z

I want to create two dropdowns.
Dropdown 1: A, W
Dropdown 2: if "A" is selected, show the options "B", "C", "D"; if "W", show "X", "Y", "Z".

I am unable to figure out how to do this. Any help will be much appreciated. Thank you all.
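A common pattern (a sketch, assuming a hypothetical lookup dropdown_options.csv with columns parent and child, and that Dropdown 1 sets a token named dropdown1): back Dropdown 2 with a dynamic data source whose query filters on the first token, so its options refresh whenever Dropdown 1 changes:

```
| inputlookup dropdown_options.csv
| search parent="$dropdown1$"
| stats count by child
| fields child
```

In Dashboard Studio, point Dropdown 2's dynamic options at this data source and map child to both label and value. The lookup name, column names, and token name are all assumptions; substitute your own.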
Hi, could anyone please help me figure out, from the logs below, how to achieve this use case: when we launch an RDP proxy session from Secret Server, we sometimes see the connection drop; e.g., look for "error" and "handshake" in the logs.

Sample event for a client:

2024-01-12 05:03:37,391 [CID:] [C:] [TID:197] ERROR Thycotic.RDPProxy.CLI.Session.ProxyConnection - Error encountered in RDP handshake for client 192.168.1.1 - (null) System.Exception: Assertion violated: stream.ReadByteInto(bufferStream) == 0x03 at Thycotic.RDPProxy.ContractSlim.Assert(Boolean condition, String conditionStr, String actualStr) at Thycotic.RDPProxy.Readers.ConnectionRequestProvider.ReadConnectionRequest(Stream stream, AuthenticationState clientState) at Thycotic.RDPProxy.CLI.Session.ProxyConnection.<DoHandshakeAndForward>d__20.MoveNext()

Sample event for a user:

2024-01-12 05:02:11,920 [CID:] [C:] [TID:266] ERROR Thycotic.DE.Feature.SS.RdpProxy.EngineRdpProxySessionService - An error was encountered while attempt to fetch proxy credentials for user 'chrisbronet' - (null)

Another use case is the discovery process from AD to Secret Server: e.g., scan AD, find the local IDs, and create the ID and password in Secret Server.

Sample events:

1) 2024-01-11 23:39:36,183 [CID:] [C:] [TID:83] ERROR Thycotic.Discovery.Sources.Scanners.Dependency.ApplicationPoolScanner - WMI (IIS) Unable to connect to xyzwin.abc.com with Exception System.Threading.ThreadAbortException: Thread was being aborted. at System.Management.IEnumWbemClassObject.Next_(Int32 lTimeout, UInt32 uCount, IWbemClassObject_DoNotMarshal[] apObjects, UInt32& puReturned) at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext() at Thycotic.Discovery.Sources.Scanners.Dependency.ApplicationPoolScanner.<>c__DisplayClass10_0.<IsIisRunningWmi>b__0(Object x) - (null)

2) 2024-01-11 23:29:47,675 [CID:] [C:] [TID:PriorityScheduler Thread @ Normal] ERROR Thycotic.Discovery.Sources.Scanners.MachinePreDiscoveryTester - Could not connect to xyx.win.abc.com with port pre-check. Please open port(s) [135, 445] - (null)

3) 2024-01-11 23:32:32,163 [CID:] [C:] [TID:PriorityScheduler Elastic Thread @ Normal] ERROR Thycotic.Discovery.Sources.Scanners.Dependency.ApplicationPoolScanner - Service Controller (IIS) Unable to connect to xyz.win.abc.com with Exception System.InvalidOperationException: Cannot open W3SVC service on computer 'xyz.win.abc.com'. ---> System.ComponentModel.Win32Exception: Access is denied --- End of inner exception stack trace --- at System.ServiceProcess.ServiceController.GetServiceHandle(Int32 desiredAccess) ... 1 line omitted ... at System.ServiceProcess.ServiceController.get_Status() at Thycotic.Discovery.Sources.Scanners.Dependency.ApplicationPoolScanner.IsIisRunningServiceController() - (null)

Thank you
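A hedged sketch for the first use case, assuming the logs land in a hypothetical index secretserver; the field names are assumptions too:

```
index=secretserver "ERROR" "RDP handshake"
| rex "handshake for client (?<client_ip>\d{1,3}(?:\.\d{1,3}){3})"
| stats count as handshake_errors by client_ip
```

The discovery failures could be grouped similarly, e.g. index=secretserver "ERROR" ("Unable to connect" OR "Could not connect") | rex "connect to (?<target_host>\S+) with" | stats count by target_host. Validate the rex patterns against your real events before trusting the counts.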
Dears, I need assistance with a Splunk query to retrieve data from two sources: source X and source Y. I want to match records where child_file_id in source Y matches file_id in source X, and retrieve the combined data. How can I achieve this?

In my source X, specifically Stealer_*, there are records, each of which includes a file_id, illustrated as 3382 in my example. When I search for that file_id, I find 6 events, all structured similarly but with different values, all sharing the same file_id. In another source, I have data related to source X. To establish connections between them, I use child_file_id as a relational identifier, similar to a database key. As depicted in the screenshot below, the child_file_id corresponds to the same file_id in the first source.

How can I construct a Splunk query to achieve this? Specifically, I want to retrieve the entire result set in a single query and table. In this query, the data from source Y (child_file_id) should be duplicated in each event from the first source, creating a unified result.

The final output would be something like this:
source1_field1, source1_field2, source1_field3, source2_field1, source2_field2

BR.
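A hedged sketch that avoids join by searching both sources at once, normalizing the key into one field, and collapsing per key (the index and source names are assumptions; stats values(*) as * keeps every field from both sides):

```
(index=my_index source="X") OR (index=my_index source="Y")
| eval join_id = coalesce(file_id, child_file_id)
| stats values(*) as * by join_id
```

If you need one row per source-X event rather than one per key, a lookup or join on join_id would be the alternative; the stats form is usually worth trying first since it scales better.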
Hello, I need your help. I started a 14-day free trial of Splunk about an hour ago. When I try to access the instance, it isn't even accessible; it appears grayed out. Should I just wait, or did I do something wrong? Thanks for your help.
How do we find the endpoints of our Splunk instance?
How can I send an alert if an event doesn't occur within 10 minutes, given data in the format below? The data arrives every hour at roughly 30-minute intervals. Example: for the data below, the alert should have triggered at 2:40.

_time                          ID    Bill_ID
2024-01-12T03:10:53.000-06:00  TTF5  80124
2024-01-12T03:08:07.000-06:00  TFB6  84958
2024-01-12T02:34:54.000-06:00  TFB6  84958
2024-01-12T02:09:48.000-06:00  TTF5  80124
2024-01-12T02:07:02.000-06:00  TFB6  84958
2024-01-12T01:36:59.000-06:00  TTF5  80124
2024-01-12T01:33:37.000-06:00  TFB6  84958
2024-01-12T01:11:13.000-06:00  TTF5  80124
2024-01-12T01:07:22.000-06:00  TFB6  84958
2024-01-12T00:37:08.000-06:00  TTF5  80124
2024-01-12T00:35:08.000-06:00  TFB6  84958
2024-01-12T00:11:16.000-06:00  TTF5  80124
2024-01-12T00:10:20.000-06:00  TFB6  84958
2024-01-11T23:36:19.000-06:00  TTF5  80124
2024-01-11T23:34:17.000-06:00  TFB6  84958
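A hedged sketch of an alert search, assuming the data is in a hypothetical index my_index: schedule it every 10 minutes and trigger when any ID has been silent too long:

```
index=my_index earliest=-24h
| stats latest(_time) as last_seen by ID
| eval minutes_silent = round((now() - last_seen) / 60, 1)
| where minutes_silent > 10
```

Since the data legitimately arrives at roughly 30-minute intervals, a larger threshold (e.g. 40) may be needed to avoid firing between normal batches; 10 is used here only because the question asks for it.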
Hello, I am using a filler gauge in one of my dashboards and I would like to show values with 2 decimal places, but I do not see any precision option for the gauge visualization. For example, I would like to display the value as 99.60 and not 100. Is this not possible at the moment in Dashboard Studio, or is there a workaround available? Thank you.
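As a partial workaround, you can at least make sure the underlying value carries the precision (a sketch; the field names are assumptions):

```
| stats count(eval(status="ok")) as ok, count as total
| eval availability = round(ok / total * 100, 2)
```

If the Studio gauge still rounds the display, a Single Value visualization does expose decimal precision under its number formatting options and may be an acceptable substitute.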
"reqUser":"mhundi","evtTime":"2023-06-08 14:04:06.504","access":"SELECT","resource":"dsc60180_ici_sde_tz_db/vehicle_master/light_truck_lob_flag,lincoln_lob_flag,model_e_lob_flag,vehicle_make_desc,vehicle_type_desc,warranty_start_date,vehicle_type_desc,warranty_start_date","resType":"@column","action":"select","result":1,"agent":"hiveServer2","policy":101343,"enforcer":"ranger-acl","sess":"00ef27f9-75a4-4821-9e8a-60f16af6b962","cliType":"HIVESERVER2","cliIP":"19.51.78.185","reqData":"SELECT * FROM (SELECT `Left`.`advisor_name`, `Left`.`appointment_created_by`, `Left`.`appointment_datetime

Fields to be extracted: reqUser, evtTime, resource
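A hedged sketch using rex, assuming each event contains the quoted JSON-style pairs shown above:

```
| rex "\"reqUser\":\"(?<reqUser>[^\"]+)\""
| rex "\"evtTime\":\"(?<evtTime>[^\"]+)\""
| rex "\"resource\":\"(?<resource>[^\"]+)\""
| table reqUser evtTime resource
```

If the events are actually complete, well-formed JSON objects, spath (or KV_MODE=json on the sourcetype) would be more robust than per-field rex.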
Hi all, I have a list of 3k+ servers for which I want to check the data flow in a specific index. How can I do this with an optimized search?
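A hedged sketch, assuming the server list is in a hypothetical lookup server_list.csv with a host column and the index is my_index; tstats keeps this cheap even for 3k+ hosts:

```
| tstats latest(_time) as last_seen where index=my_index by host
| append [| inputlookup server_list.csv | fields host]
| stats max(last_seen) as last_seen by host
| where isnull(last_seen)
```

The result is the hosts from your list with no data in the index over the search time range. Adjust the where clause (e.g. last_seen < relative_time(now(), "-4h")) to catch hosts that stopped recently rather than never reported. Hostname case and FQDN mismatches between the lookup and the index are the usual gotcha.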
This page states:  You can't delete default indexes and third-party indexes from the Indexes page.    Can I still delete default indexes through the CLI?  
Is it possible to run a playbook on demand, meaning a manual trigger by an analyst such as clicking a playbook during a workbook step? I have a use case where I want to run a playbook, but only from user initiation. I could implement some logic for user interaction at the container, but I'd prefer not to have something waiting for input until a user can get to it.
When a container is created that contains multiple artifacts from a forwarded Splunk event, I noticed playbooks run against every artifact that has been added, causing duplicate actions. Reading through the boards here, a possible solution was adding logic to check for a container tag on run: use a decision block to see if the tag exists; if so, simply end; otherwise continue, and add the tag when complete. My problem is that this appears to work when testing against existing containers (debugging against an existing container ID and all artifacts), but when a new container is created it seems to ignore the check and run multiple times. My guess is the playbook is being run concurrently for each of the artifacts instead of one at a time. 1. What is causing the problem? 2. What is the best practice to prevent this from occurring?
I have this Splunk query: index="sample" "log_processed.env"=prod "log_processed.app"=sample "log_processed.traceId"=90cf115a05ebb87b2 | table _time, log_processed.message It displays empty messages in the table cells, although I can see the content in the raw event. Is there a limit that prevents me from seeing the whole message in the table?
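If log_processed.message is a nested JSON field, the table cell can come up empty when automatic extraction stops short. A hedged sketch that extracts it explicitly:

```
index="sample" "log_processed.env"=prod "log_processed.app"=sample "log_processed.traceId"=90cf115a05ebb87b2
| spath path=log_processed.message output=message
| table _time, message
```

If the value is very long, also check that it is not being truncated by extraction limits (e.g. maxchars under the [kv] stanza in limits.conf); that is a guess worth verifying against your environment.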