All Posts

Hey, thank you very much for this query! I've decided to go with yours out of the two responses here, as it displays just the one host at the end instead of all of them, which is nicer at a glance. You made it seem very simple and I appreciate that; I have a lot to learn!
Hey, I certainly agree that ChatGPT isn't the best place to learn, but it comes in handy sometimes. I need to start taking some actual training, though. Your solution did work, so thank you for sharing it with me. I then went and used GPT to help explain the details to me, and I think I understand it all now, so that's nice. Setting sources to different values and comparing them that way is neat, and I'm glad I've seen that now.
I know that for collecting all the audit events related to GitHub Enterprise and GitHub Organization from GitHub Cloud via the modular input with Account Type = Organization, the Personal Access Token also needs the "admin:org" scope granted.
I have a cyber security finding that states "The Splunk service accepts connections encrypted using SSL 2.0 and/or SSL 3.0". Of course, SSL 2.0 and 3.0 are not secure protocols. How do I disable SSL 2.0/3.0? Can I just disable them in the browser, or do I need to change a setting within Splunk?
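For reference, this is a server-side setting, not a browser one: the scanner is testing what protocol versions splunkd and Splunk Web will negotiate. In modern Splunk releases the accepted versions are typically pinned with the `sslVersions` settings in the relevant .conf files; the sketch below uses illustrative values, so check the reference docs for your specific version before applying it.

```
# $SPLUNK_HOME/etc/system/local/server.conf  (splunkd / management port)
[sslConfig]
sslVersions = tls1.2

# $SPLUNK_HOME/etc/system/local/web.conf  (Splunk Web)
[settings]
sslVersions = tls1.2
```

A restart is required after changing these. Older releases may instead use list-style values such as `*,-ssl2,-ssl3` to exclude the insecure protocols.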
Need some assistance with creating a query where I am trying to capture the parent folder and the first child folder, respectively, from a print output log that has both Windows and Linux folder paths. Sample data is below; the folder paths I am trying to get in a capture group are the source_dir values.

_time, username, computer, printer, source_dir, status
2024-09-24 15:32, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\program\..., Printed
2024-09-24 13:57, buser, cmp_buser, print01_offic1, c:\program files\documents\..., Printed
2024-09-24 12:13, cuser, cmp_cuser, print01_offic2, \\cpn-fs.local\data\transfer\..., In queue
2024-09-24 09:26, buser, cmp_buser, print01_offic1, F:\transfers\program\..., Printed
2024-09-24 09:26, buser, cmp_buser, print01_front1, \\cpn-fs.local\transfer\program\..., Printed
2024-09-24 07:19, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\program\..., In queue

I am currently using a Splunk query where I call these folders in my initial search, but I want to control this using a rex command so I can add an eval command to see whether they were printed locally or from a server folder. The current query is:

index=printLog source_dir IN ("\\\\cpn-fs.local\data\*", "\\\\cpn-fs.local\transfer\*", "c:\\program files\\*", "F:\\transfer\\*") status="Printed"
| table status, _time, username, computer, printer, source_dir

I tried using the following rex but didn't get any return:

| rex field=source_dir "(?i)<FolderPath>(?i[A-Z][a-z]\:|\\\\{1})[^\\\\]+)\\\\[^\\\\]+\\\\)"

In my second effort, I generated these two regexes with Splunk's field extractor. I know I need to combine them with an "OR" operator to cover both the Windows and Linux paths, but I get an error when trying to combine them.

Regex generated from the Windows path c:\program files:
^[^ \n]* \w+,,,(?P<FolderPath>\w+:\\\w+)

Regex generated from the Linux path \\cpn-fs.local\data:
^[^ \n]* \w+,,,(?P<FolderPath>\\\\\w+\-\w+\d+\.\w+\.\w+\\\w+)

To start, I am looking for an output like the one below, replacing "source_dir" with the rex-created "FolderPath":

_time, username, computer, printer, FolderPath, file, status
2024-09-24 15:32, auser, cmp_auser, print01_main1, \\cpn-fs.local\data\, Printed
2024-09-24 13:57, buser, cmp_buser, print01_offic1, c:\program files\, Printed

Thanks for any help given.
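As an illustrative sketch (not from the thread): the two field-extractor patterns can be combined into one alternation that matches either a drive-letter prefix or a UNC host, then captures through the first child folder. The pattern below is tested against Python's `re`, not Splunk's `rex`, so the escaping will need rechecking when moved into SPL; the name `FolderPath` matches the question, everything else is assumed.

```python
import re

# One pattern for both path styles:
#   c:\program files\...        -> drive-letter branch [a-z]:
#   \\cpn-fs.local\data\...     -> UNC branch \\host
# The capture stops after the first child folder, trailing backslash included.
FOLDER_RE = re.compile(r"(?i)(?P<FolderPath>(?:[a-z]:|\\\\[^\\]+)\\[^\\]+\\)")

samples = [
    r"\\cpn-fs.local\data\program\reports.pdf",
    r"c:\program files\documents\letter.doc",
    r"F:\transfers\program\notes.txt",
]

for s in samples:
    m = FOLDER_RE.match(s)
    print(m.group("FolderPath") if m else None)
```

In SPL the same idea would go into a single `| rex field=source_dir "..."` call with the alternation inside one named group; note that Splunk requires its own layer of backslash escaping on top of the regex's, which is exactly where the attempts above appear to go wrong.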
I believe you are hitting the extraction_cutoff limit for spath, maybe. If raising that is not an option, try setting KV_MODE=json on the search head for that particular source or sourcetype, if you don't have a lot of data coming in.

https://docs.splunk.com/Documentation/Splunk/latest/Admin/limitsconf

extraction_cutoff = <integer>
* For 'extract-all' spath extraction mode, this setting applies extraction only to the first <integer> number of bytes.
* This setting applies to both auto kv extraction and the spath command, when explicitly extracting fields.
* Default: 5000
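A sketch of where those two settings live, with illustrative values; the stanza name `your_sourcetype` and the 50000-byte limit are placeholders, not recommendations:

```
# $SPLUNK_HOME/etc/system/local/limits.conf
# Raise the spath/auto-kv byte limit from its 5000-byte default so
# large JSON events are fully extracted rather than silently truncated.
[kv]
extraction_cutoff = 50000

# props.conf on the search head (in the app of your choice)
# Alternative: parse the whole event as JSON at search time.
[your_sourcetype]
KV_MODE = json
```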
@PickleRick, thanks for responding.

1. I just posted a sample for each of the indexes as a reply to tread_splunk's question. Can you please check and see if that makes it clear? https://community.splunk.com/t5/Splunk-Search/How-to-join-search-results-from-two-indexes-based-on-multiple/m-p/700245/highlight/true#M237645

2. The stats-based search is good, and I will consider your suggestion of adding only the necessary fields. However, this query is incomplete, in the sense that I am able to correlate only one event from index_2 to index_1, but not the other event.

3. The initial thought behind the renaming was to provide a distinction between the two events from the same index (index_2) by identifying them as "current" and "previous".

I hope I was able to clarify. Thanks
Sample from index_1:

{ "index1Id": "Id_1", "currEventId": "EventId_1", "prevEventId": "EventId_2" }

EventId_1 from index_2:

{ "eventId": "EventId_1", "eventOrigin": "EventOrigin_1" }

EventId_2 from index_2:

{ "eventId": "EventId_2", "eventOrigin": "EventOrigin_2" }

The final result I am looking for, after the search:

index1Id  prevEventId  prevEventOrigin  currEventId  currEventOrigin
Id_1      EventId_2    EventOrigin_2    EventId_1    EventOrigin_1

Thanks @tread_splunk
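An aside, not from the thread, just to make the desired correlation concrete: in plain Python terms the join being asked for resolves both event references from index_1 against a lookup built from index_2. Field names come from the samples above; the data values are the dummy samples.

```python
# Dummy events mirroring the samples above.
index_1 = [{"index1Id": "Id_1", "currEventId": "EventId_1", "prevEventId": "EventId_2"}]
index_2 = [
    {"eventId": "EventId_1", "eventOrigin": "EventOrigin_1"},
    {"eventId": "EventId_2", "eventOrigin": "EventOrigin_2"},
]

# Build a lookup from eventId to its origin, then resolve both the
# current and previous references from each index_1 record.
origin = {e["eventId"]: e["eventOrigin"] for e in index_2}

rows = [
    {
        "index1Id": r["index1Id"],
        "prevEventId": r["prevEventId"],
        "prevEventOrigin": origin.get(r["prevEventId"]),
        "currEventId": r["currEventId"],
        "currEventOrigin": origin.get(r["currEventId"]),
    }
    for r in index_1
]
print(rows)
```

In SPL this shape is usually reached without `join`, by evaluating a common key across both indexes and collapsing with `stats values(...) by` that key; the Python above only pins down the expected output row.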
Sure, np. I have untagged you and won't tag going forward.
@kranthimutyala2 I am a volunteer here, as are most of those providing answers. Please don't tag me in your posts. If I have time and have something to contribute, I will try to help. But I will choose which posts to answer and when.
What is the data source for that table? The JSON you have shared does not appear to cover that
| untable Component Level count
| eval Component_Level=Component."_".Level
| table Component_Level count
| transpose 0 header_field=Component_Level
| fields - column
This is the problem: I don't know how this works... but I want to use the data that appears in the table at the bottom:
Which search are you trying to extend? If it is "mttrSearch", you would do something like this:

"dataSources": {
    "dsQueryCounterSearch1": {
        "options": {
            "extend": "mttrSearch",
            "query": "| where AlertSource = AWS and AlertSeverity IN (6,5,4,3,1) | dedup Identifier | stats count as AWS",
            "queryParameters": {
                "earliest": "$earliest_time$",
                "latest": "$latest_time$"
            }
        },
        "type": "ds.search"
    }
}
Hi Splunk, I have a table like below:

Component  Green  Amber  Red
Resp_time  0      200    400
5xx        0      50     100
4xx        0      50     100

I want to combine them to produce a single row like below:

Resp_time_Green  Resp_time_Amber  Resp_time_Red  5xx_Green  5xx_Amber  5xx_Red  4xx_Green  4xx_Amber  4xx_Red
0                200              400            0          50         100      0          50         100
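As an aside, not from the thread: the reshaping being asked for is a flatten of every (Component, Level) pair into a single "<Component>_<Level>" column. In plain Python terms, using the sample values above:

```python
# Rows of the original table: Component plus one column per level.
table = [
    {"Component": "Resp_time", "Green": 0, "Amber": 200, "Red": 400},
    {"Component": "5xx", "Green": 0, "Amber": 50, "Red": 100},
    {"Component": "4xx", "Green": 0, "Amber": 50, "Red": 100},
]

# Flatten to one row keyed "<Component>_<Level>", mirroring what an SPL
# untable + eval + transpose combination produces.
row = {
    f'{r["Component"]}_{level}': r[level]
    for r in table
    for level in ("Green", "Amber", "Red")
}
print(row)
```

This yields one dictionary with nine keys (three components times three levels), which is the single-row layout requested.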
Since you are using joins, you could be hitting limits on the subsearches - have you tried a shorter timeframe?
Hi Team, I have the below JSON field in the Splunk event:

[
{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_1","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},
{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_2","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"},
{"sourceAccountId":"sourceAccountId_1","Remarks":"Successfully Migrated","recordStatus":"Success","RecordID":"RecordID_3","destinationAccountId":"destinationAccountId_1","defaultOwnerId":"defaultOwnerId_1"}
]

Just for example I have added 3 entries, but in reality we have more than 200 records in this field in a single event. When I use spath to extract this data it gives blank results, yet the same data with fewer records (<10) extracts all the key-value pairs. Is there a better way to extract fields from large event data? Please help me with the SPL query. Thanks @yuanliu @gcusello
Looks like there were some invisible junk characters in the code. I got it working. Thanks for your help.
Could you provide some sample (dummy) events from both indexes?
My background is network engineering, so I can't speak to any specific software processing benefits of HTTP vs HTTPS. However, since HTTP is essentially plain text, it would be fairly simple to take the packet off the wire. Having to decrypt HTTPS would, as an additional step, add processing requirements, but as others pointed out, depending upon the compute power of your server(s) there usually isn't a noticeable hit or queuing of data. Most systems today have compute that will outperform the physical network connection.