All Posts
Hi Team, we need to display the single latest event in Splunk by query.
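A minimal sketch of one common way to do this, assuming the data lives in an index named your_index (a placeholder): events come back in reverse time order by default, so keeping the first one yields the latest event.

```
index=your_index
| head 1
```

If the ordering cannot be relied on, adding an explicit | sort - _time before | head 1 makes the intent unambiguous.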
Pro tip: To get help with data analytics, present sample data (in text, anonymized as needed), illustrate the desired output (in text), and describe the logic between the data and the output. If you have a command that does not give the desired output, illustrate the actual output (anonymized as needed) and explain why it differs from the desired output (if not painfully obvious).
I have a field CI extracted from a JSON payload:

{ "Name": "zSeries", "Severity": 5, "Category": "EVENT", "SubCategory": "Service issues - Unspecified", "TStatus": "OPEN", "CI": "V2;Y;Windows;srv048;LogicalDisk;C:", "Component": "iphone" }

Further, I want the CI field value extracted using DELIMS = ";". I have created the props and transforms configuration below, but it is not working.

[source::cluster_test]
REPORT-fields = ci-extraction

[ci-extraction]
SOURCE_KEY = CI
DELIMS = ";"
FIELDS = CI_V2,CI_1,CI_2,CI_3,CI_4,CI_5

Any help is highly appreciated.
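A search-time sketch that avoids the transforms question entirely, assuming CI has already been extracted from the JSON (the index name and the CI_1 ... CI_6 field names are illustrative):

```
index=your_index
| eval parts = split(CI, ";")
| eval CI_1 = mvindex(parts, 0), CI_2 = mvindex(parts, 1), CI_3 = mvindex(parts, 2), CI_4 = mvindex(parts, 3), CI_5 = mvindex(parts, 4), CI_6 = mvindex(parts, 5)
```

One thing to note about the configuration above: DELIMS-based REPORT extractions run against _raw unless SOURCE_KEY points elsewhere, and SOURCE_KEY = CI only works if the CI field is already available when that transform runs; the eval/split route sidesteps that ordering dependency.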
You are correct not to want to use join, for join is perhaps not what you need. But you need to give us a precise prescription of the field or fields on which you want to join these three indices. Two of them have an identical field name, "user". Do they have the same values? (Windows and Unix usually do not.) Then, a different pair of indices have an identical field name, "issuer". Then, there is yet another field name in cyber bearing semantic semblance to a user, namely "requestor". Is this the field you want to "join" with the "user" field in the other two indices? If you want to join requestor in cyber with user in the other two indices, the following should be your first draft:

index IN (cyber, AD, unix)
| rename requestor AS user
| stats values(_eventtime) as _event_time values(issuer) as issuer values(purpose) as purpose values(elevID) as elevID values(action) as action values(path) as path values(cmd) as cmd by user

Even so, there can be variations depending on other requirements. Unless you give a prescription, others cannot give you a good answer.
Hi @AL3Z ... by "threshold", are you looking to find the "average", so that if the threshold (average) is crossed you will create an alert, etc.? If so, please check the avg function inside the stats command:

| stats avg(attacker_score) as avg_attacker_score by domain

Maybe we need more details from you to suggest something better. Thanks.
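For the alerting side, one minimal sketch, assuming a fixed cutoff of 50 (an illustrative value to replace with the real threshold) and a placeholder index name:

```
index=your_index
| stats avg(attacker_score) as avg_attacker_score by domain
| where avg_attacker_score > 50
```

Saved as an alert that triggers when the number of results is greater than zero, this fires only for domains whose average crosses the cutoff.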
This problem is not well defined. But before that, I would caution against any data truncation in props.conf.

Anyway, you need to prescribe the formula/criterion by which you want this done. Are you using the front part (that you discard) or the end part (that you want to preserve) to make the determination? If the front, how are you going to determine that it is the front part? If the end, how are you going to determine that it is the end part? Without this information, there can be a million ways to trim this specific log entry, but most of those methods will fail you in general.

One example keyed on the front could be:

| eval _raw = replace(_raw, ".*util.SplunkUtil : ", "")

Here, I'm assuming that util.SplunkUtil and the spacing are fixed values.

One example keyed on the end could be:

| eval _raw = replace(_raw, ".*: *(\[LOGIN_\w+\]|.+)", "\1")

Here, I assume that the colon and LOGIN_* in square brackets are the fixture. As you can see, the combinations are infinite. What exactly is your design?
The answer is revealed in the documentation of values. Use the "AS" modifier. If you know that each IP corresponds to only one company, the following will do the trick:

index=regular_index
| stats values(company) as company by ip
| table company, ip
Hi,

Below is my current search at the moment:

index=o365 sourcetype=* src_ip="141.*"
| rex field=_raw "download:(?<download_bytes>\d+)"
| rex field=_raw "upload:(?<upload_bytes>\d+)"
| dedup UserId, ClientIP
| table UserId, download_bytes, upload_bytes
| head 10

I am trying to get downloaded bytes and uploaded bytes into a table to find out whether anything suspicious is going on in the network; however, I have been unable to return anything other than the source IP.

Thanks in advance.
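One possible next step, sketched under the assumption that the download:/upload: patterns really occur in _raw (if download_bytes comes back empty, verifying the two rex extractions against a sample event is the first move): aggregate per user instead of dedup, so the byte counts survive into the table.

```
index=o365 src_ip="141.*"
| rex field=_raw "download:(?<download_bytes>\d+)"
| rex field=_raw "upload:(?<upload_bytes>\d+)"
| stats sum(download_bytes) as total_download sum(upload_bytes) as total_upload by UserId, ClientIP
| sort - total_download
```

Sorting by total_download floats the heaviest consumers to the top, which is usually the quickest way to spot anomalies.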
Maybe you can explain the significance of that top-level key "2023-03-16"? There ought to be some semantic significance to the fact that it changes every day. Also, is that the only top-level key? If so, I'd say the developers made a poor design choice. The same can be said about the second-level keys ("1", "2", ...), which seem semantically redundant with the third-level key "id". If you have any influence over the developers, maybe suggest that they get rid of the second-level keys and just make an array of the third-level objects.

Anyway, I am not going to assume any semantic significance in the top-level key(s?) for now. I also assume that your desire for wildcard search is about searching those third-level keys such as "name" and "email". To my understanding, you want a simple search such as

name = "michael *" late = "true"

without having to confront field names such as 2023-03-16.1.name. This is something you can try:

| spath path=employees
| eval date = json_array_to_mv(json_keys(employees))
| mvexpand date
| eval day_employees = json_extract(employees, date)
| eval employee_id = json_array_to_mv(json_keys(day_employees))
| mvexpand employee_id
| eval day_employees = json_extract(day_employees, employee_id)
| spath input=day_employees

Your sample data (single date, two ids) would give these field values. (There are too many fields, so the following is transposed.)
fieldname                     1                   2
_mkv_child                    0                   1
activeProject.duration                            6027
activeProject.project_id                          67973
activeProject.project_title                       Blue Book
activeProject.task_id                             42282
activeProject.task_title                          Blue Book task
afterWorkTime                 0                   0
arrived                       false               2023-03-16 09:17:00
atWorkTime                    0                   6060
beforeWorkTime                0                   0
date                          2023-03-16          2023-03-16
desktimeTime                  0                   6027
efficiency                    0                   14.75
email                         demo@desktime.com   demo3@desktime.com
employee_id                   1                   2
group                         Accounting          Marketing
groupId                       1                   106345
isOnline                      false               true
late                          false               true
left                          false               2023-03-16 10:58:00
name                          Michael Scott       Andy Bernard
notes.Background                                  Law and accounting
notes.Skype                   Find.me
notes.Slack                   MichielS
offlineTime                   0                   0
onlineTime                    0                   6027
productiveTime                0                   4213
productivity                  0                   69.9
profileUrl                    url.com             url.com
work_ends                     00:00:00            18:00:00
work_starts                   23:59:59            09:00:00

Here is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "{ \"employees\": { \"2023-03-16\": { \"1\": { \"id\": 1, \"name\": \"Michael Scott\", \"email\": \"demo@desktime.com\", \"groupId\": 1, \"group\": \"Accounting\", \"profileUrl\": \"url.com\", \"isOnline\": false, \"arrived\": false, \"left\": false, \"late\": false, \"onlineTime\": 0, \"offlineTime\": 0, \"desktimeTime\": 0, \"atWorkTime\": 0, \"afterWorkTime\": 0, \"beforeWorkTime\": 0, \"productiveTime\": 0, \"productivity\": 0, \"efficiency\": 0, \"work_starts\": \"23:59:59\", \"work_ends\": \"00:00:00\", \"notes\": { \"Skype\": \"Find.me\", \"Slack\": \"MichielS\" }, \"activeProject\": [] }, \"2\": { \"id\": 2, \"name\": \"Andy Bernard\", \"email\": \"demo3@desktime.com\", \"groupId\": 106345, \"group\": \"Marketing\", \"profileUrl\": \"url.com\", \"isOnline\": true, \"arrived\": \"2023-03-16 09:17:00\", \"left\": \"2023-03-16 10:58:00\", \"late\": true, \"onlineTime\": 6027, \"offlineTime\": 0, \"desktimeTime\": 6027, \"atWorkTime\": 6060, \"afterWorkTime\": 0, \"beforeWorkTime\": 0, \"productiveTime\": 4213, \"productivity\": 69.9, \"efficiency\": 14.75, \"work_starts\": \"09:00:00\", \"work_ends\": \"18:00:00\", \"notes\": { \"Background\": \"Law and accounting\" }, \"activeProject\": { \"project_id\": 67973, \"project_title\": \"Blue Book\", \"task_id\": 42282, \"task_title\": \"Blue Book task\", \"duration\": 6027 } } } }, \"__request_time\": \"1678957028\" }"
``` data emulation above ```
I have three indexes I am trying to join; each has at least three similar columns. I want to table the results in order to generate a report and an alert. What would be the fastest way to work around using the join command, if possible? Because my environment is built to minimum specs, I need to avoid anything resource-heavy. Below is my query; the "| table" part is where I am having issues. cyber is my elevated-account vault, AD is my Active Directory, and unix is for my Red Hat environment. I am a little lost currently, as I have not played with Splunk in a couple of years.

index=cyber AND index=AD AND index=unix
| table _eventtime, issuer, requestor, purpose (for cyber)
| table user, issuer, elevID, action (for AD)
| table user, path, cmd (for unix)
Hello,

I have a requirement to create an orange button in a Splunk dashboard, and on clicking the orange button I need to load a few panels.

Kindly let me know how this can be accomplished.

Thanks
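There is no native button element in Simple XML, but a link input that sets a token, plus panels that depend on that token, behaves like one. A hedged sketch (the token name show_panels and the search inside the panel are illustrative; making the button orange would still need custom CSS or an HTML/Dashboard Studio approach):

```
<form>
  <fieldset submitButton="false">
    <input type="link" token="show_panels">
      <label></label>
      <choice value="yes">Load panels</choice>
    </input>
  </fieldset>
  <row depends="$show_panels$">
    <panel>
      <title>Shown after the click</title>
      <event>
        <search><query>index=_internal | head 5</query></search>
      </event>
    </panel>
  </row>
</form>
```

The row stays hidden until the token is set by clicking the link.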
I'm confused about how to truncate this log. How do I do it from props.conf or with an SPL command? Can anyone provide a solution?

<11>1 2021-03-18T15:05:30.501Z abcdefghi-jajaj-b1bc07001-xb0k7.abcdefghi-user - - - [Originator@7776 kubernetes__container_name="abcdefghi-jajaj" docker__container_id="a1bbddc80312d8501f1b1ac015d525722f105a71d6521be0728e8b057066eda1" kubernetes__pod_name="abcdefghi-jajaj-b1bc07001-xb0k7" bosh_index="0" stream="stbcd" kubernetes__namespace_name="abcdefghi-develop" bosh_id="e0700d15-ca5a-1f35-8e01-bd83d3eb705a" bosh_deployment="service-instance_f08cb851-fa53-1206-0a6b-705f3fa0f301" docker_id="a1bbddc80312d" tag="kubernetes.var.log.containers.abcdefghi-user-b1bc07001-xb0k7_abcdefghi-develop_abcdefghi-user-a1bbddc80312d8501f1b1ac015d525722f105a71d6521be0728e8b057066eda1.log" instance_type="werkir"] 2021-03-18 22:05:00.210 INFO [abcdefghi-jajaj,3010acf256f7c7e0,717ea36c0d67f3da,true] 6 --- [nio-0020-exec-1] c.id.bankabcde.common.util.SplunkUtil : [LOGIN_abc]|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|uobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|sessionID=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|appVersion=ABC123|mobilePhone=ABC123|custGroup=ABC123

I want to cut it down to something like this:

[LOGIN_abc]|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|uobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|sessionID=ABC123|mobilePhone=ABC123|mobilePhone=ABC123|appVersion=ABC123|mobilePhone=ABC123|custGroup=ABC123

Thank you.
Hello,

Does the stats values command combine unique values? For example:

company                      ip
companyA companyA            1.1.1.1
companyB companyB companyB   1.1.1.2

index=regular_index
| stats values(company) by ip
| table company, ip

Should the command above produce the following output?

company    ip
companyA   1.1.1.1
companyB   1.1.1.2

Thank you so much
Hello,

1) There is no "collect" command used in ***the group of commands*** (refer to the photo).
2) The merging into one row happened after the summary index.

The group of commands came from performing a lookup on company.csv from vulnerability_index (imaginary tables, but the concept is the same; I can't use the real data/fields):

index=vulnerability_index
| lookup company.csv ip as ip OUTPUTNEW ip, company, location

vulnerability_index:

ip        vulnerability
1.1.1.1   vuln1
1.1.1.2   vuln2

company.csv:

company    ip        location
companyA   1.1.1.1   locationA1
companyA   1.1.1.1   locationA2

Results (note that there is an "enter" / carriage return between companyA and companyA, i.e. they are multiple values, not one string):

company     ip        location
companyA    1.1.1.1   locationA1
companyA              locationA2

After moving ***the group of commands*** into a summary index and searching the summary index, it merged company into one row, "companyA companyA" (and likewise "locationA1 locationA2"):

company              ip        location
companyA companyA    1.1.1.1   locationA1 locationA2

Is this normal behavior for a summary index? If yes, is there a way to keep the regular format? Thank you!!
>>> I've downloaded the splunk security essential files all into my laptop

May we know if you downloaded the single tar file (for example, splunk-security-essentials_371.tgz)?

>>> but I can't figure out how to upload into into splunk enterprise as an app. What is my next step and where do I go to do this?

After downloading that tar file (for example, "splunk-security-essentials_371.tgz"), in your Splunk go to Apps (left-side dropdown) --> Manage Apps --> Install app from file, then select the tar file and load it. It will install smoothly; a Splunk restart will then be required.
I've downloaded the Splunk Security Essentials files all onto my laptop, but I can't figure out how to upload it into Splunk Enterprise as an app. What is my next step, and where do I go to do this?
Tested the rex and substr, which work perfectly. The abstract command is giving some trouble; I will check it again.
https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Abstract

| makeresults
| eval samplelog="h1 #_\"he$$llohibye"
| rex field=samplelog "^(?P<EightCharsRex>........)"
| eval EightCharsSubStr=substr(samplelog,1,8)
```| abstract maxterms=9 maxlines=1```
| table samplelog EightCharsRex EightCharsSubStr

This produces this result:

samplelog            EightCharsRex   EightCharsSubStr
h1 #_"he$$llohibye   h1 #_"he        h1 #_"he
Hi @AL3Z ... as Rich's reply said, building a Splunk app or add-on can be an easy task if you have some development experience. If you are really interested, you could learn it; most apps and add-ons are simple and easy. I went to the link you provided, and it looks good. Give it a try and update us with your views; maybe we can suggest something. Thanks.
It would help to know the specifics of each query. Without them, the best I can do is:

<<query number 1>>
| append [ <<query number 2>> ]
| append [ <<query number 3>> ]
| stats values(*) as * by id
Yes, it's easy to do.  See my reply on 24 Oct.