All Posts

Hi @Yim  Are you trying to extract it from a field or from the raw data? Please share some sample data and elaborate on what you are trying to achieve as an output.
Start diagnosis with this:

| tstats count where index=* by index

Is "myindex" in the list?
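If "myindex" does show up but your search still returns nothing, one hedged follow-up (keeping "myindex" as the placeholder index name) is to run this over All Time and see whether the events simply fall outside the time range you searched:

| tstats count where index=myindex by _time span=1d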
The problem here is an unclear requirement: what is the logic to collapse the three rows after dedup into that single row?

As @gcusello speculates, the three rows have common values of identity. Is this correct? Such should be stated explicitly. The mock data also shows identical first and last for the three rows. Is this always true? Such should be stated explicitly, too.

More intricately, the mock data contains different values of extensionAttribute11 and extensionAttribute10. What are the criteria for choosing one or another of these differing values in the collapsed table? Volunteers here cannot read minds. extensionAttribute10 in one of the three rows is blank; in the rest of the rows it is the same value. One can reasonably speculate that you want the non-blank value to be used in the collapsed table. But is this speculation correct? Are all non-blank values identical? Again, do not make volunteers read your mind.

Additionally, what is the logic to determine which value remains with field name email, and which goes to email2, email3, etc.?

In the following example, I'll take an arbitrary selection among emails (5), take every value of extensionAttribute11 (3), and take the affirmative in (4). You get this single row:

email: user@domain.com
extensionAttribute10: user@domain.com
extensionAttribute11: user@consultant.com, user@domain.com
first: User
last: Surname
identity: USurname
email2: userT0@domain.com
email3: userT1@domain.com

This is the search:

index=collect_identities sourcetype=ldap:query user
| stats values(*) as * by first last identity
| eval idx = mvrange(1, mvcount(email))
| eval json = json_object()
| foreach idx mode=multivalue
    [eval ordinal = <<ITEM>> + 1, json = json_set(json, "email" . ordinal, mvindex(email, <<ITEM>>))]
| spath input=json
| eval email = mvindex(email, 0)
| table email extension* first last identity email*

(Of course, you can reduce extensionAttribute11 to one value if you know the logic.)

Here is an emulation. Play with it and compare with real data.

| makeresults format=csv data="email, extensionAttribute10, extensionAttribute11, first, last, identity
user@domain.com, , user@consultant.com, User, Surname, USurname
userT1@domain.com, user@domain.com, user@domain.com, User, Surname, USurname
userT0@domain.com, user@domain.com, user@domain.com, User, Surname, USurname"
``` the above emulates index=collect_identities sourcetype=ldap:query user ```
If you mean sending to two output groups from a single forwarder - that works until one of them gets blocked. Then both stop. It's by design.
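For reference, a minimal outputs.conf sketch of the setup being described, assuming a forwarder cloning to two indexer groups (group and server names are illustrative). Because both groups share the forwarder's output pipeline, a sustained block on either group eventually stalls the other, which is the behavior noted above:

# outputs.conf on the forwarder (group and server names are illustrative)
[tcpout]
defaultGroup = primary_indexers, secondary_indexers

[tcpout:primary_indexers]
server = idx1a.example.com:9997, idx1b.example.com:9997

[tcpout:secondary_indexers]
server = idx2a.example.com:9997, idx2b.example.com:9997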
Is this table what you are looking for?

sn_vul_detection    sn_vul_vulnerable_item
2233                2000

Here is a quick cheat:

| rex mode=sed "s/:\s*(\d+)\n/=\1\n/g"
| extract
| stats sum(sn_vul_*) as sn_vul_*

If you must have that colon-separated notation, add

| foreach * [eval notation = mvappend(notation, "<<FIELD>>: " . <<FIELD>>)]

Here is an emulation of your sample data. Play with it and compare with real data.

| makeresults
| eval data = mvappend("2024-10-29 20:14:49 (715) worker.6 worker.6 txid=XXXX JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1167 Total records archived: 2167 Total related records archived: 1167",
    "2024-10-29 20:13:17 (337) worker.0 worker.0 txid=YYYY JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1066 Total records archived: 2066 Total related records archived: 1066")
| mvexpand data
| rename data as _raw
| eval _time = strptime(replace(_raw, "^(\S+ \S+).*", "\1"), "%F %T")
``` data emulation above ```
After testing UF output cloning, we found it is impossible to achieve truly identical data distribution across multiple clusters. Is there any good solution for dual writing? This is urgent!
I was able to achieve this using

return $search_ticket

Thanks.
Hello, is there any good solution to the problem of cloning to multiple groups so that the indexers receive copies of the data, even if not with exact precision?
Hi Mario, Yes, mvn was not installed. We were able to install the extension successfully after installing mvn. But then we faced another issue: the metrics were not populating in AppD. We raised a support ticket for this. As per the current update on the case, the EC2 instance on which the extension is installed uses IMDSv2. The extension does not support IMDSv2, and that is the potential reason the metrics are not populating. This detail was not mentioned anywhere in the AppDynamics documentation, and we ran into this roadblock. We are working with the support team to find a workaround. Regards, Fadil
So, you are indirectly confirming that location information does not exist in index data.  Have you tried the search I gave above?
Hi, how did you extract the extension? Did you use Maven to create the build that outputs the zip file for you to extract?
Hello everyone, I'm currently collecting logs from a Fortigate WAF using Syslog, but I've encountered an issue where, after running smoothly for a while, the Splunk Heavy Forwarder (HF) suddenly stops receiving and forwarding the logs. The only way to resolve this is by restarting the HF, after which everything works fine again, but the problem eventually recurs.

Could anyone advise on:
- Possible causes for this intermittent log collection issue
- Any specific configurations to keep the Syslog input stable
- Troubleshooting steps or recommended best practices to prevent having to restart the HF frequently

Any insights or similar experiences would be much appreciated! Thank you!
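Not a full answer, but a hedged first check, assuming the HF's internal logs are searchable from your search head: watch queue fill on the HF in metrics.log around the time ingestion stops. Queues that sit at or near 100% usually point to a blocked output or parsing pipeline rather than the syslog input itself (replace your_hf_hostname with the actual host):

index=_internal source=*metrics.log* host=your_hf_hostname group=queue
| eval pct_full = round(current_size_kb * 100 / max_size_kb, 1)
| timechart span=5m max(pct_full) by name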
Thanks for the reply. Sorry, that's not what I want to achieve. My search spans the last 30 days - this would only make it look at the timespan > 7 and < 14 days. I want Splunk to run this search on the given cron schedule, not to change the search time span.
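In case it helps, a hedged savedsearches.conf sketch: for a scheduled search, the cron schedule and the dispatch time span are independent settings, so the cron only controls when the search runs, not how far back it looks (the stanza name and cron expression below are illustrative):

# savedsearches.conf (stanza name and cron are illustrative)
[my_30_day_report]
enableSched = 1
cron_schedule = 0 6 * * *
dispatch.earliest_time = -30d@d
dispatch.latest_time = now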
In our environment:
Observability technology -- IBM Sterling File Gateway
AppDynamics Agent -- Java 23.12
Problem statement -- AppDynamics is not able to discover BTs, and the IBM SFG vendor does not agree to share class names and method names with Cisco tools.
Can someone please help with discovering the BTs for SFG? Any support here is appreciated.
Hello @ITWhisperer, I hope I have added more information; please let me know if I need to add anything else. The actual need is: I have a field that sometimes has an empty value. When I select All in the input dropdown, the values can be anything, including empty; but when we choose any specific value in the input dropdown, we don't need to consider empty values. So I planned to create two base searches: one for when we choose All in the input dropdown, and the other for when we choose any value apart from All. When we choose any other value in the input dropdown, we can use | where isnotnull(field_name) | head 10000, which is not needed when we select All, since the data volume is huge. Thanks!
I have data like this in a Splunk search:

2024-10-29 20:14:49 (715) worker.6 worker.6 txid=XXXX JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1167 Total records archived: 2167 Total related records archived: 1167
2024-10-29 20:13:17 (337) worker.0 worker.0 txid=YYYY JobPersistence Total records archived per table: sn_vul_vulnerable_item: 1000 sn_vul_detection: 1066 Total records archived: 2066 Total related records archived: 1066

How can I prepare a table as below? Basically, prepare a list of tables and the sum of their counts between the text "Total records archived per table:" and "Total records archived:":

sn_vul_vulnerable_item: 2000
sn_vul_detection: 2233

This is what I have so far:

node=* "Total records archived per table" "Total related records archived:"
| rex field=_raw "Total records archived per table ((?m)[^\r\n]+)(?<tc_table>\S+): (?<tc_archived_count>\d+) Total related records archived:"
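Not necessarily the approach you had in mind, but a hedged alternative sketch, assuming every per-table counter in these events carries the sn_vul_ prefix: capture all table/count pairs with a multi-match rex, expand them, and sum per table:

node=* "Total records archived per table" "Total related records archived:"
| rex max_match=0 "(?<pair>sn_vul_\w+:\s*\d+)"
| mvexpand pair
| rex field=pair "(?<tc_table>sn_vul_\w+):\s*(?<tc_archived_count>\d+)"
| stats sum(tc_archived_count) as total_archived by tc_table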
Hello Splunkers,

I have an input dropdown field. When I select "*" in that dropdown, I need to pass base search 1 to all searches in the dashboard; when I select any other value apart from "*", I need to pass base search 2 to all searches in the dashboard.

<form version="1.1">
  <label>Clone sample</label>
  <search>
    <query>
      | makeresults
      | eval curTime=strftime(now(), "GMT%z")
      | eval curTime=substr(curTime,1,6)
      | rename curTime as current_time
    </query>
    <progress>
      <set token="time_token_now">$result.current_time$</set>
    </progress>
  </search>
  <search id="base_1">
    <query>
      index=2343306 sourcetype=logs*
      | head 10000
      | fields _time index Eventts IT _raw
      | fillnull value="N/A"
    </query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <search id="base_2">
    <query>
      index=2343306 sourcetype=logs*
      | where isnotnull(CODE)
      | head 10000
      | fields _time index Eventts IT CODE _raw
      | fillnull value="N/A"
    </query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <fieldset submitButton="false" autoRun="true">
    <input type="radio" token="field1">
      <label>field1</label>
      <choice value="All">All</choice>
      <choice value="M1">M1</choice>
      <choice value="A2">A2</choice>
      <change>
        <eval token="base_token">case("All"="field1", "base_1", "All"!="field1", "base_2")</eval>
      </change>
    </input>
    <input type="time" token="time_token" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>table</title>
        <search base="$base_token$">
          <query>| table *</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

I have tried passing the token from the input dropdown, but it doesn't work. Can you please help me fix this issue? Thanks!
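One hedged way around this: as far as I know, the base attribute of a SimpleXML <search> is resolved when the dashboard loads and cannot be switched with a token, so a common workaround is a single base search whose filter is a token set by the dropdown's change handler (the token name code_filter and the id base_all below are illustrative, not from your dashboard):

<!-- single base search; $code_filter$ is set by the dropdown's change handler -->
<search id="base_all">
  <query>
    index=2343306 sourcetype=logs* $code_filter$
    | head 10000
    | fields _time index Eventts IT CODE _raw
    | fillnull value="N/A"
  </query>
  <earliest>$time_token.earliest$</earliest>
  <latest>$time_token.latest$</latest>
</search>

<input type="radio" token="field1" searchWhenChanged="true">
  <label>field1</label>
  <choice value="All">All</choice>
  <choice value="M1">M1</choice>
  <choice value="A2">A2</choice>
  <default>All</default>
  <change>
    <condition value="All">
      <!-- a bare * keeps every event, including those with an empty CODE -->
      <set token="code_filter">*</set>
    </condition>
    <condition>
      <!-- CODE=* keeps only events where CODE exists, roughly | where isnotnull(CODE) -->
      <set token="code_filter">CODE=*</set>
    </condition>
  </change>
</input>

Panels then reference <search base="base_all"> directly (no token in the base attribute), and changing the dropdown simply re-runs that one base search with the new filter.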
You are so great!
Thanks for your reply! But if I use oneshot to upload the csv file, could it match the specific sourcetype I added in the props.conf?
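In case it's useful, a hedged sketch of the CLI form: splunk add oneshot accepts an explicit -sourcetype, so you can point it at the sourcetype stanza you defined in props.conf on that instance (the path, index, and sourcetype below are placeholders):

$SPLUNK_HOME/bin/splunk add oneshot /path/to/yourfile.csv -index your_index -sourcetype your_sourcetype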