All Posts

Not sure I understand your examples: you indicate the data is in a file, but you are not sending that file, only the data following the -d curl option. To send a file, use -d @filename.
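For instance, a minimal sketch of that form (the file name events.json is illustrative, reusing the example token from the question):

    curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" \
         https://localhost:8088/services/collector/event \
         -d @events.json

Here events.json would contain the same JSON payload(s) you would otherwise have placed inline after -d.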
@SplunkSN  as @richgalloway says, the head command does not limit the columns/fields retrieved; it simply takes the first n results, so in your timechart case it will return the earliest 15 rows of your timechart, effectively 15 rows of 15-minute spans. If you want to keep only the dest_domain values with the highest count, you can use a where clause in the timechart, like this: | timechart span=15m count by dest_domain usenull=f useother=f where count in top10 which will show you the 10 dest_domain values that have the highest count.
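Laid out as a full search for readability (the base search here is a made-up placeholder):

    index=proxy sourcetype=dns
    | timechart span=15m count by dest_domain usenull=f useother=f where count in top10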
Thank you for the feedback.
@somesoni2 still the stats command raises the error even when escaping the = with \. The error is: The argument 'https://abc.......?export\=download&id\=1HGFF5ziAFGn8161CKQC$Xyuhni9PNK_X' is invalid.
Hi @deephi ...  Most Linux systems should be compatible with the Splunk Agent (Splunk "Universal Forwarder"): Linux, all 3.x and 4.x kernel versions, x86 (64-bit); Linux, 5.x kernel versions 5.4 and higher, x86 (64-bit). https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/Systemrequirements   Red Hat Enterprise Linux 8.0 is distributed with kernel version 4.18.0-80, so it is well within the support limit. Thanks.
Thank you, this command helped me a lot.
Is RHEL 8.2 compatible with the Splunk Agent? Is there any existing RHEL 8.2 server with the Splunk Agent, or would this require new costing? Please advise. Thank you in advance.
The query needs to run in an intranet environment, so I can't provide it. I can describe the result: the events I get from the SPL query are only 4769 events, but if I then run a new query using the user field from those results as a keyword, I find that there are three other EventCode records for this user within 24 hours. So what I want is for the query to return only users who have 4769 events and none of the other three EventCodes within 24 hours.
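To make that requirement concrete, a minimal SPL sketch of the logic (the index, the 24-hour window, and field names such as EventCode and user are assumptions, not taken from the actual environment):

    index=wineventlog earliest=-24h EventCode=* user=*
    | stats dc(EventCode) as distinct_codes values(EventCode) as codes by user
    | search distinct_codes=1 codes=4769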
Thank you
Hi, In the example below, I clearly understand that the "hello world" will end up as a Splunk event:

{ "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!" }

curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":"hello world"}'

Now imagine that my json file contains many items like below:

{ "time": 1426279439, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello world!" }
{ "time": 1426279538, // epoch time
  "host": "localhost",
  "source": "random-data-generator",
  "sourcetype": "my_sample_data",
  "index": "main",
  "event": "Hello everybody!" }

Should the curl command to use be like this?

curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":}'

Last question: instead of using a command prompt to send the json logs into Splunk, is it possible to use a script to do that? Or something else? Does anybody have good examples of that? thanks
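For reference, a hedged sketch of how multiple events are usually batched in one HEC request: the event JSON objects are simply concatenated in the request body, or kept in a file and sent with -d @filename (the token and the file name events.json are illustrative only):

    curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" \
         https://localhost:8088/services/collector/event \
         -d '{"event":"Hello world!","sourcetype":"my_sample_data","index":"main"}{"event":"Hello everybody!","sourcetype":"my_sample_data","index":"main"}'

    curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" \
         https://localhost:8088/services/collector/event \
         -d @events.json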
Hi @nareshkumarg, Did you find a solution to the above? If so, could you please let me know what you found?
You still need to explain your use case in Splunk.  As I said, I use CSV update regularly; in fact, my CSV files have a similar structure.  In my case, I have two timestamps of particular interest, "First Detected" and "Last Detected", both of them similar to "Date_Find" in your example.  But "Last Detected" changes in every scan, so I use this field as _time when I ingest. What do you use as _time?  Do you have a field that changes every time? If you do not select a field in the CSV as _time, Splunk will use the time of your upload as _time.  Will that serve your purpose? If there is no value of _time that makes sense in your data, can you just use the file name to determine which is the latest? (To exemplify: there are lots of data inconsistencies in my CSV files, so in some searches I simply rely on the file name - which translates into the source field.)
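If the file-name route fits, a minimal sketch of that idea (the index is a placeholder, and it assumes the CSV file names sort chronologically so the lexicographic maximum is the newest file):

    index=my_csv_index source=*.csv
    | eventstats max(source) as latest_source
    | where source=latest_source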
Hi @JWai28 ,

Assuming you have a time dropdown called "global_time" and a drop-down called "dd_span", with values set up like this (note the '+' next to each value):

You could try something like this in your search:

index=main
    [| makeresults
    | eval earliest_epoch=strptime("$global_time.earliest$", "%Y-%m-%dT%H:%M:%S.%3QZ")
    | eval earliest_relative=relative_time(now(),"$global_time.earliest$")
    | eval earliest = coalesce(earliest_epoch, earliest_relative)
    | eval latest=relative_time(earliest, "$dd_span$")
    | table earliest_epoch, earliest, latest
    | return earliest, latest]

Breaking this down:
This converts the global_time.earliest token into an epoch time (the dashboard either supplies a date string, or something like "-15m" - both cases are catered for).
Then it creates the latest token based on the earliest time.
Finally, it returns those values to the main search - setting the earliest and latest values.

It's a bit ugly, but should do the job.

Cheers, Daniel
I did, but no solution received. Can you help me please: https://community.splunk.com/t5/Splunk-Search/Error-Search/m-p/665820#M228449
Hi @lanning_bradley,

If you are able to create a React based dashboard, you can use this: https://splunkui.splunk.com/Packages/react-ui/StepBar

There's a tutorial to help get you up and running with the Splunk UI Toolkit here: https://splunkui.splunk.com/Create/ComponentTutorial

Cheers, Daniel
Hi @DaveBunn,

I wrote the Word Tree Viz and as cool as it is... I don't think it will give you what you need in this case. However, you may be interested in the Treeview Viz. Here's some SPL to create a treeview representation of the hierarchy:

|makeresults
| eval raw = "Name=\"Dave Bunn \", OrgPosition=\"12345_Dave_Bunn \",Manager=\"1230_Mrs_Bunn\",MiscDetails=\"some text about my job\"@@@ Name=\"Mrs Bunn\",OrgPosition=\"1230_Mrs_Bunn\",Manager=\"10_The_Big_Boss\",MiscDetails=\"some text about Mrs Bunns job\"@@@ Name=\"Big Boss\",OrgPosition=\"10_The_Big_Boss\",Manager=\"0_The_Director\",MiscDetails=\"Manager of HR\""
| makemv raw delim="@@@"
| mvexpand raw
| rename raw as _raw
| fields _raw
| extract
| fields - _time, _raw
``` Above: Creating the test data ```
| rename OrgPosition as id, Manager as parentid, Name as label
``` This bit is to fix any managers that don't appear in the data - i.e. 0_The_Director ```
| appendpipe[|stats count by parentid| eval label=parentid, id=parentid | table label, id]
``` Reverse so the appendpipe appears first for the visualisation ```
| reverse
| eval iconDoc="user-circle", iconFolderOpen="users"
| eval color=if(label="Dave Bunn","#DC4E41",null())

And here's what it looks like:

Alternatively, you could use the Network Diagram Viz. The SPL looks like this:

|makeresults
| eval raw = "Name=\"Dave Bunn \", OrgPosition=\"12345_Dave_Bunn \",Manager=\"1230_Mrs_Bunn\",MiscDetails=\"some text about my job\"@@@ Name=\"Mrs Bunn\",OrgPosition=\"1230_Mrs_Bunn\",Manager=\"10_The_Big_Boss\",MiscDetails=\"some text about Mrs Bunns job\"@@@ Name=\"Big Boss\",OrgPosition=\"10_The_Big_Boss\",Manager=\"0_The_Director\",MiscDetails=\"Manager of HR\""
| makemv raw delim="@@@"
| mvexpand raw
| rename raw as _raw
| fields _raw
| extract
| fields - _time, _raw
``` Above: Creating the test data ```
| appendpipe[| stats count by Manager | eval type="user", nodeText=Manager, from=Manager | table from, nodeText, type]
| appendpipe[| stats count by Name, OrgPosition | eval type="user", from=OrgPosition, nodeText=Name | table from, nodeText, type]
| appendpipe[| stats count by OrgPosition, Manager | eval from=Manager, to=OrgPosition | table from, to]
| eval color=if(nodeText="Dave Bunn","red",null())
| table from, to, nodeText, color, type
| search from=*

When choosing a hierarchical view, that gives you this: It will look a bit more impressive when there are more people and roles listed.

Hopefully those two visualisations give you something to work from.

Cheers, Daniel
Hi @Thulasiraman ,

Here's one way to create a table using some of Splunk's built-in JSON commands.

|makeresults
| eval json="{ \"Group10\": { \"owner\": \"Abishek Kasetty\", \"fail\": 2, \"total\": 12, \"agile_team\": \"Punchout_ReRun\", \"test\": \"\", \"pass\": 6, \"report\": \"\", \"executed_on\": \"Mon Oct 23 03:10:48 EDT 2023\", \"skip\": 0, \"si_no\": \"10\" }, \"Group09\": { \"owner\": \"Lavanya Kavuru\", \"fail\": 45, \"total\": 190, \"agile_team\": \"Hawks_ReRun\", \"test\": \"\", \"pass\": 42, \"report\": \"\", \"executed_on\": \"Sun Oct 22 02:57:43 EDT 2023\", \"skip\": 0, \"si_no\": \"09\" }}"
``` Above is just to create the test data ```
| eval keys = json_keys(json)
| eval keys = json_array_to_mv(keys)
| mvexpand keys
| eval group = json_extract(json, keys)
| fields - _time, json
| spath input=group
``` Table out the fields you're interested in ```
| table agile_team, pass, fail

The search is doing the following:
Get all the "GroupXX" keys (assuming these change each time you run the search)
Convert the Group keys to a multivalue field
MVExpand the keys so there's one per event
Pull out each Group's values using the key
Run spath to convert the JSON to fields
Table out what you need

The output looks like this:

Cheers, Daniel
Thanks for the suggestion. I do admit and agree that the easiest and best option at this point is to just take the syslog-ng route, but I was trying to figure out how to do this natively in Splunk if possible. It does not seem like the IP wildcard works in a TCP/UDP stanza, at least not in my 9.X UF; i.e. this did not work:

[udp://192.168.2.*:514]
connection_host = ip
index = checkpoint
sourcetype = syslog

Additionally, I think I have figured out that the problem with using acceptFrom as I originally showed is that Splunk will only process the first stanza for any particular port, so there can't be a "fall back to the catchall" type of logic if you are only using [udp://514] or [tcp://514]. You CAN have a generic port stanza and an IP-specific stanza, and that arrangement will be honored, i.e.:

[udp://192.168.1.1:514]
index=singularDeviceIndex

[udp://514]
index=catchallDeviceIndex

But I can't figure out how to make this work (or it just can't be done):

[udp://514]
acceptFrom=192.168.1.0/24
index=WhateversInThatSubnetOnly

[udp://514]
index=AnythingAndEverythingElse
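As a side note, one alternative sometimes used to get per-subnet index routing natively is an index-time transform on the indexer or heavy forwarder, keyed on the sending host. This is only a hedged sketch: it assumes connection_host = ip (so the host field carries the source IP), and the sourcetype, index, and stanza names are placeholders:

    props.conf
    [syslog]
    TRANSFORMS-route_subnet = route_192_168_1

    transforms.conf
    [route_192_168_1]
    SOURCE_KEY = MetaData:Host
    REGEX = ^host::192\.168\.1\.
    DEST_KEY = _MetaData:Index
    FORMAT = whatevers_in_that_subnet_only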
Hello, There would be no difference, because I converted your suggestion to my real data, so I have already fixed those details. Please suggest. Thanks, Marius
My current Splunk infra setup is clustered for Search Heads and Indexers, and we are using a deployer and cluster master to manage configs for the respective SH and IDX tiers. For example, can I manually place an updated config on SH1 and then run a rolling restart so the members sync/replicate with each other? This would be in the event the deployer is down. But eventually, once the deployer is up, we will place the updated config on the deployer, so that when we run a sync it will not affect/remove the file from the SH cluster. Will there be any issues in this scenario?
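For reference, once the deployer is back, the usual push to the SH cluster is done with the apply command (the hostname and credentials below are placeholders):

    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme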