All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thank you, this command helped me a lot.
Is RHEL 8.2 compatible with the Splunk agent (universal forwarder)? Is there any existing RHEL 8.2 server running the Splunk agent, or would this involve new costs? Please advise. Thank you in advance.
The query has to stay in an intranet environment, so I can't share it, but I can describe the results. My SPL query returns only EventCode 4769 events; however, when I run a new query using the user field from those results as a keyword, I find that the same user has records for three other EventCodes within 24 hours. What I want is for the query to return only users who have EventCode 4769 events and none of the three other EventCodes within 24 hours.
Thank you
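(For what it's worth, here is a hedged sketch of the filter described above. It assumes the fields are named user and EventCode and the index is wineventlog — adjust these to the actual data. It keeps only users whose events in the last 24 hours consist exclusively of EventCode 4769.)

```spl
index=wineventlog earliest=-24h
| stats dc(EventCode) as code_count values(EventCode) as codes by user
| where code_count=1 AND codes="4769"
```

The dc() count of distinct EventCodes per user makes "only 4769 and nothing else" a simple where clause.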
Hi, In the example below, I clearly understand that "hello world" will be updated in a Splunk event:

{
  "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!"
}

curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":"hello world"}'

Now imagine that my JSON file contains many items like these:

{
  "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!"
}
{
  "time": 1426279538, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello everybody!"
}

Should the curl command look like this?

curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":}'

Last question: instead of using a command prompt to send the JSON logs to Splunk, is it possible to use a script to do that, or something else? Does anybody have good examples of that? Thanks
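(On the last question — using a script instead of curl — here is a minimal Python sketch. The token and URL are the placeholders from the post above. The HEC /services/collector/event endpoint accepts multiple JSON event objects concatenated back to back in one request body, so a single POST can carry a batch.)

```python
import json
import urllib.request

# Placeholders from the post above; replace with your own endpoint and token.
HEC_URL = "https://localhost:8088/services/collector/event"
HEC_TOKEN = "12345678-1234-1234-1234-1234567890AB"

def build_hec_batch(events):
    """Concatenate event dicts into one HEC batch payload.

    HEC accepts several JSON objects back to back in a single
    request body; no enclosing array or separators are needed.
    """
    return "".join(json.dumps(e) for e in events)

def send_batch(payload):
    """POST the batch to HEC (requires a reachable Splunk instance)."""
    req = urllib.request.Request(
        HEC_URL,
        data=payload.encode("utf-8"),
        headers={"Authorization": "Splunk " + HEC_TOKEN},
    )
    return urllib.request.urlopen(req)

events = [
    {"time": 1426279439, "host": "localhost", "source": "random-data-generator",
     "sourcetype": "my_sample_data", "index": "main", "event": "Hello world!"},
    {"time": 1426279538, "host": "localhost", "source": "random-data-generator",
     "sourcetype": "my_sample_data", "index": "main", "event": "Hello everybody!"},
]
payload = build_hec_batch(events)
# send_batch(payload)  # uncomment when a live HEC endpoint is available
```

This is a sketch, not a definitive implementation; in production you would add error handling and TLS certificate verification.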
Hi @nareshkumarg, Did you find a solution to the above? If so, could you please let me know what you found?
You still need to explain your use case in Splunk.  As I said, I use CSV updates regularly; in fact, my CSV files have a similar structure.  In my case, I have two timestamps of particular interest, "First Detected" and "Last Detected", both similar to "Date_Find" in your example.  But "Last Detected" changes in every scan, so I use this field as _time when I ingest. What do you use as _time?  Do you have a field that changes every time? If you do not select a field in the CSV as _time, Splunk will use the time of your upload as _time.  Will that serve your purpose? If there is no value of _time that makes sense in your data, can you just use the file name to determine which is the latest? (To exemplify, there are lots of data inconsistencies in my CSV files, so in some searches I simply rely on the file name, which translates into the source field.)
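(As a hedged illustration of "use this field as _time when I ingest": with indexed CSV extractions, props.conf can name the timestamp column. The sourcetype name and TIME_FORMAT below are assumptions; the format string must match how the dates actually appear in the file.)

```ini
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = Last Detected
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```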
Hi @JWai28,   Assuming you have a time dropdown called "global_time" and a drop-down called "dd_span":   With values set up like this (note the '+' next to each value): You could try something like this in your search:

index=main
    [| makeresults
    | eval earliest_epoch=strptime("$global_time.earliest$", "%Y-%m-%dT%H:%M:%S.%3QZ")
    | eval earliest_relative=relative_time(now(),"$global_time.earliest$")
    | eval earliest = coalesce(earliest_epoch, earliest_relative)
    | eval latest=relative_time(earliest, "$dd_span$")
    | table earliest_epoch, earliest, latest
    | return earliest, latest]

Breaking this down:
- It converts the global_time.earliest token into an epoch time (the dashboard supplies either a date string or something like "-15m"; both cases are catered for).
- Then it creates the latest token based on the earliest time.
- Finally, it returns those values to the main search, setting the earliest and latest values.

It's a bit ugly, but should do the job.   Cheers, Daniel
I did, but received no solution. Can you help me please:  https://community.splunk.com/t5/Splunk-Search/Error-Search/m-p/665820#M228449
Hi @lanning_bradley, If you are able to create a React-based dashboard, you can use this:  https://splunkui.splunk.com/Packages/react-ui/StepBar There's a tutorial to help get you up and running with the Splunk UI Toolkit here: https://splunkui.splunk.com/Create/ComponentTutorial   Cheers, Daniel
Hi @DaveBunn, I wrote the Word Tree Viz and, as cool as it is... I don't think it will give you what you need in this case. However, you may be interested in the Treeview Viz. Here's some SPL to create a treeview representation of the hierarchy:

| makeresults
| eval raw = "Name=\"Dave Bunn \", OrgPosition=\"12345_Dave_Bunn \",Manager=\"1230_Mrs_Bunn\",MiscDetails=\"some text about my job\"@@@ Name=\"Mrs Bunn\",OrgPosition=\"1230_Mrs_Bunn\",Manager=\"10_The_Big_Boss\",MiscDetails=\"some text about Mrs Bunns job\"@@@ Name=\"Big Boss\",OrgPosition=\"10_The_Big_Boss\",Manager=\"0_The_Director\",MiscDetails=\"Manager of HR\""
| makemv raw delim="@@@"
| mvexpand raw
| rename raw as _raw
| fields _raw
| extract
| fields - _time, _raw
``` Above: Creating the test data ```
| rename OrgPosition as id, Manager as parentid, Name as label
``` This bit is to fix any managers that don't appear in the data - i.e. 0_The_Director ```
| appendpipe [| stats count by parentid | eval label=parentid, id=parentid | table label, id]
``` Reverse so the appendpipe results appear first for the visualisation ```
| reverse
| eval iconDoc="user-circle", iconFolderOpen="users"
| eval color=if(label="Dave Bunn","#DC4E41",null())

And here's what it looks like: Alternatively, you could use the Network Diagram Viz. The SPL looks like this:

| makeresults
| eval raw = "Name=\"Dave Bunn \", OrgPosition=\"12345_Dave_Bunn \",Manager=\"1230_Mrs_Bunn\",MiscDetails=\"some text about my job\"@@@ Name=\"Mrs Bunn\",OrgPosition=\"1230_Mrs_Bunn\",Manager=\"10_The_Big_Boss\",MiscDetails=\"some text about Mrs Bunns job\"@@@ Name=\"Big Boss\",OrgPosition=\"10_The_Big_Boss\",Manager=\"0_The_Director\",MiscDetails=\"Manager of HR\""
| makemv raw delim="@@@"
| mvexpand raw
| rename raw as _raw
| fields _raw
| extract
| fields - _time, _raw
``` Above: Creating the test data ```
| appendpipe [| stats count by Manager | eval type="user", nodeText=Manager, from=Manager | table from, nodeText, type]
| appendpipe [| stats count by Name, OrgPosition | eval type="user", from=OrgPosition, nodeText=Name | table from, nodeText, type]
| appendpipe [| stats count by OrgPosition, Manager | eval from=Manager, to=OrgPosition | table from, to]
| eval color=if(nodeText="Dave Bunn","red",null())
| table from, to, nodeText, color, type
| search from=*

When choosing a hierarchical view, that gives you this: It will look a bit more impressive when there are more people and roles listed.   Hopefully those two visualisations give you something to work from.   Cheers, Daniel
Hi @Thulasiraman, Here's one way to create a table using some of Splunk's built-in JSON commands.

| makeresults
| eval json="{ \"Group10\": { \"owner\": \"Abishek Kasetty\", \"fail\": 2, \"total\": 12, \"agile_team\": \"Punchout_ReRun\", \"test\": \"\", \"pass\": 6, \"report\": \"\", \"executed_on\": \"Mon Oct 23 03:10:48 EDT 2023\", \"skip\": 0, \"si_no\": \"10\" }, \"Group09\": { \"owner\": \"Lavanya Kavuru\", \"fail\": 45, \"total\": 190, \"agile_team\": \"Hawks_ReRun\", \"test\": \"\", \"pass\": 42, \"report\": \"\", \"executed_on\": \"Sun Oct 22 02:57:43 EDT 2023\", \"skip\": 0, \"si_no\": \"09\" }}"
``` Above is just to create the test data ```
| eval keys = json_keys(json)
| eval keys = json_array_to_mv(keys)
| mvexpand keys
| eval group = json_extract(json, keys)
| fields - _time, json
| spath input=group
``` Table out the fields you're interested in ```
| table agile_team, pass, fail

The search is doing the following:
- Get all the "GroupXX" keys (assuming these change each time you run the search)
- Convert the Group keys to a multivalue field
- mvexpand the keys so there's one per event
- Pull out each Group's values using the key
- Run spath to convert the JSON to fields
- Table out what you need

The output looks like this: Cheers, Daniel
Thanks for the suggestion. I do admit and agree that the easiest and best option at this point is to just take the syslog-ng route, but I was trying to figure out how to do this natively in Splunk if possible.  It does not seem like an IP wildcard works in a TCP/UDP stanza, at least not in my 9.x UF; i.e. this did not work:

[udp://192.168.2.*:514]
connection_host = ip
index = checkpoint
sourcetype = syslog

Additionally, I think I have figured out that the problem with using acceptFrom as I originally showed is that Splunk will only process the first stanza for any particular port, so there can't be a "fall back to the catchall" type of logic if you are only using [udp://514] or [tcp://514]. You CAN have a generic port stanza and an IP-specific stanza, and that arrangement will be honored, i.e.:

[udp://192.168.1.1:514]
index = singularDeviceIndex

[udp://514]
index = catchallDeviceIndex

But I can't figure out how to make this work (or it just can't be done):

[udp://514]
acceptFrom = 192.168.1.0/24
index = WhateversInThatSubnetOnly

[udp://514]
index = AnythingAndEverythingElse
Hello, There would be no difference, because I converted your suggestion to my real data, so I have already fixed those details. Please advise. Thanks, Marius
My current Splunk infrastructure is clustered for search heads and indexers, and we are using a deployer and a cluster master to manage configs for the respective SH and IDX tiers. For example, can I manually place an updated config on SH1 and then run a rolling restart so the search heads sync/replicate it with each other? This would be in the event the deployer is down. Eventually, once the deployer is back up, we would place the updated config on the deployer, so that when we run a sync it will not affect/remove the file from the SH cluster. Will there be any issues in this scenario?
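(For reference, a sketch of the eventual deployer push described above; the hostname, port, and credentials are placeholders, and the config is assumed to have been staged under the deployer's shcluster directory first.)

```shell
# Run on the deployer once it is back up, after placing the updated
# config under $SPLUNK_HOME/etc/shcluster/apps/<app>/:
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```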
I used the wrong case in the field name.  Try my edited answer.
Hello, I tried the command you suggested and it did not have any effect. Please advise. Thanks
After opening a case with Splunk tech support (because we were unable to upgrade our Windows 2019 servers in place from Splunk version 9.0.0 to 9.1.1), we were instructed to back up the etc directory, uninstall Splunk, do a fresh install of 9.1.1, and then copy the old etc directory back over. We did just that, except we also put it on new/different hardware, and now we can't log in to Splunk Web: we get the login screen, it takes our credentials, and then we get the three dots of death... Any help/advice is tremendously appreciated.
@gcusello  Hi! Thank you for your advice! (1) It will be kind of difficult to list all 280 indexes. We can probably decrease the list to 68 by using something like index=p*. I was wondering if there might be an alternative way to do it without listing all the indexes in the macro's search. (2) The rule is actually useful to us, since we had a few performance issues caused by users running index=*, selecting a big time period, and searching for some "text" across all of our 280+ indexes. Just curious why you are saying it isn't useful? Regards, @mlevsh
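(A hedged sketch of the macro approach discussed above: the macro name and index names are illustrative. A macro keeps the index list in one place instead of repeating it in every search, and wildcards can shorten the enumeration.)

```ini
# macros.conf
[team_indexes]
definition = (index=p* OR index=security OR index=network)
```

Searches would then begin with the macro invocation, `team_indexes` wrapped in backticks, followed by the rest of the search.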
@phanTom  Thank you for all your help! I am familiar with the daemon names; I am trying to identify the relevant log line in each <daemon>.log to fit my cases. Let me be more specific: for example, I think that the log message for a playbook run is "decided_command_handler_process_containers.cpp : 597 : DECIDED_CMD_PROCESS_CONTAINERS: rule <playbook id> on container <container number> SUCCESS", and the log message for a successful ingestion is "connector_executor.cpp : 934 : INGESTDCommandProcessor::ExecuteConnector DONE. Outcome: success". Could you help me identify the relevant cpp file and line number (597 and 934 are from the examples I pasted here) for each case I wrote above? I hope I did not complicate things too much.