
All Posts

I am also facing some issue after running this command on a Windows machine. Also, what configuration is required on the Windows side to forward logs to Splunk Cloud? I am new to this, so I have been facing this issue for the last 3 days.
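For context, getting Windows logs into Splunk Cloud generally involves installing the Universal Forwarder on the Windows machine, applying the credentials app downloaded from your Splunk Cloud instance (which sets up outputs.conf and TLS for you), and then defining inputs. A minimal inputs.conf sketch for Windows Event Logs (the index name here is just an example):

    # %SPLUNK_HOME%\etc\system\local\inputs.conf on the Windows forwarder
    [WinEventLog://Security]
    disabled = 0
    index = main

    [WinEventLog://System]
    disabled = 0
    index = main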
Thank you @gcusello. I tried this solution, but it didn't work. I think Splunk reads the computer name from another file that has a higher priority.
The reason you are getting this message is that the indexers do not have the LOOKUP-SFDC-USER_NAME definition. The following Knowledge Article explains what is happening. To get rid of this message, I would suggest you open a support case; a Splunk Cloud engineer will be able to address it for you. Robertino
In a clustered environment, you can also enable the 'list_dist_peer' capability to view the overall status of your peers in the Monitoring Console.
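If you need to grant that capability manually, a minimal authorize.conf sketch (the role name mc_viewer is invented for illustration):

    # $SPLUNK_HOME/etc/system/local/authorize.conf
    [role_mc_viewer]
    # allow this role to list distributed search peers
    list_dist_peer = enabled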
By default, the return command returns only the first value of the specified fields. Use return n to return n values.
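A quick illustration (normally the return would sit inside a subsearch; the field values here are made up):

    | makeresults count=5
    | streamstats count as n
    | eval host = "server" . n
    | return 3 host

This emits something like (host="server1") OR (host="server2") OR (host="server3") rather than just the first value.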
Hi, I use a multiselect drilldown input to select the items I want to check, so the inputs would be like "NB, IPhone, Mac, PC" or "NB, IPhone", and I want to change the inputs into another format, like below, so I can use them in a subsearch: Device=NB OR Device=IPhone OR Device=Mac OR Device=PC
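For what it's worth, a multiselect can emit that format by itself via its valuePrefix and delimiter options; a minimal Simple XML sketch (token name and labels are invented):

    <input type="multiselect" token="device_tok">
      <label>Devices</label>
      <valuePrefix>Device=</valuePrefix>
      <delimiter> OR </delimiter>
      <choice value="NB">NB</choice>
      <choice value="IPhone">IPhone</choice>
      <choice value="Mac">Mac</choice>
      <choice value="PC">PC</choice>
    </input>

Selecting all four choices would then expand $device_tok$ to Device=NB OR Device=IPhone OR Device=Mac OR Device=PC.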
I have a similar problem, but I have to do it pairwise, e.g. 2nd row - 1st row, 4th row - 3rd row, 6th - 5th, and so on. How can we do it in Splunk? (I am doing a workaround: exporting to Excel and then using =A2-A1, =A4-A3.) Is it possible to do it in the query itself?

Value pairs and expected differences:
43, 65 = 22
24, 47 = 23
36, 62 = 26
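One way to do this in the query itself, sketched against the sample values above with streamstats and autoregress (untested beyond this toy data):

    | makeresults format=csv data="Value
    43
    65
    24
    47
    36
    62"
    | streamstats count as row
    | autoregress Value p=1
    ``` Value_p1 now holds the previous row's Value ```
    | eval diff = if(row % 2 == 0, Value - Value_p1, null())
    | where isnotnull(diff)

This yields diff = 22, 23, 26 for the three pairs.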
Thanks. I was able to use strptime and convert it to Epoch and use strftime to the format i wanted. Thank you. 
Hello, I accepted your suggestion as the solution. I would still sort by Score even if I had Score2 and Score3. I made some modifications: I used addcoltotals, added a "total other" row, and added Score2 and Score3. The only problem is I don't know where the Expense value "21" came from. Can you take a look at my search below and see if it looks correct? Thank you for your help.

| makeresults format=csv data="Expense,Name,Score,Score2,Score3
1,Rent,2000,20000,200000
2,Car,1000,10000,100000
3,Insurance,700,7000,70000
4,Food,500,5000,50000
5,Education,400,4000,40000
6,Utility,200,2000,30000
7,Entertainment,100,1000,10000
8,Gym,70,700,70000
9,Charity,50,500,5000"
| sort 0 -Score
| streamstats count as row sum(Score) as running, sum(Score2) as running2, sum(Score3) as running3
| eventstats count(Name) as total_name, sum(Score) as total, sum(Score2) as total2, sum(Score3) as total3
| where row <= 6
| eval Score=case(row == 6, total - running + Score, true(), Score)
| eval Score2=case(row == 6, total2 - running2 + Score2, true(), Score2)
| eval Score3=case(row == 6, total3 - running3 + Score3, true(), Score3)
| eval other_name_ct = total_name - 5
| eval Name=case(row == 6, "Other(" . other_name_ct . ")", true(), Name)
| addcoltotals labelfield=Name
| fields - row running running2 running3 total total2 total3
I have JSON files which I am trying to split into separate events, as the JSON contains multiple events within each log. Here is an example of what a log would look like:

{ "vulnerability": [ { "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false }, { "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false } ], "next": "test", "total_count": 109465 }

In this example there would be two separate events that I need extracted; I am essentially trying to pull out the two nested event objects. Each log should have this same exact JSON format, but there could be any number of events included.

First event:

{ "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false }

Second event:

{ "event": { "sub1": { "complexity": "LOW" }, "sub2": { "complexity": "LOW" } }, "id": "test", "description": "test", "state": "No Known", "risk_rating": "LOW", "sources": [ { "date": "test" } ], "additional_info": [ { "test": "test" } ], "was_edited": false }

I also want to exclude the opening

{ "vulnerability": [

and closing

], "next": "test", "total_count": 109465 }

portions of the log files. Am I missing something in how to set this sourcetype up? I have the following currently, but it does not seem to be working:

LINE_BREAKER = \{(\r+|\n+|\t+|\s+)"event":
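For reference, one common approach is to break on the boundary between array elements and strip the wrapper with SEDCMD; a props.conf sketch (the sourcetype name is invented, and the regexes assume the exact layout shown above, so they may need tuning):

    [my:vuln:json]
    SHOULD_LINEMERGE = false
    TRUNCATE = 0
    # break between one event object and the next; the captured comma is discarded
    LINE_BREAKER = \}\s*(,)\s*\{\s*"event"
    # strip the opening wrapper from the first event
    SEDCMD-strip_open = s/^\s*\{\s*"vulnerability":\s*\[\s*//
    # strip the closing wrapper from the last event
    SEDCMD-strip_close = s/\]\s*,\s*"next":[\s\S]*$//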
Is the semantic meaning of dt_day "day of year"? For that, Splunk uses %j. (%d is day of month, but you cannot have day of month without month.) Meanwhile, it is much better to simply convert the entire eventTime to epoch.

| makeresults format=csv data="eventTime
2024-01-30T05:00:27Z"
``` data emulation above ```
| eval eventTime = strptime(eventTime, "%Y-%m-%dT%H:%M:%SZ")
| eval dt_day = strftime(eventTime, "%j")
| fieldformat eventTime = strftime(eventTime, "%F %T")

For this you get:

dt_day  eventTime
030     2024-01-30 05:00:27

But if you really want day of month without month, you can skip all the conversion and treat eventTime as a simple string.

| makeresults format=csv data="eventTime
2024-01-30T05:00:27Z"
``` data emulation above ```
| eval dt_year = mvindex(split(eventTime, "T"), 0)
| eval dt_day = mvindex(split(dt_year, "-"), -1)

This gives you:

dt_day  dt_year     eventTime
30      2024-01-30  2024-01-30T05:00:27Z

Hope this helps.
Hi, I am setting up a search head/indexer setup. I have port 9997 listening on the indexer, and I configured the search head to send to the indexer (since I have the files being sent to the search head). I can see the SYN packets being sent from the search head to the indexer, but that's about it. I am not sure what the indexer is doing with them; it's not sending any error back or anything.

[screenshot: tcpdump capture on the indexer]

[screenshot: tcpdump capture and logs from the search head]

Let me know what I need to do to fix this. Thank you in advance.
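A few checks that often help in this situation (commands shown for a *nix install; adjust paths on Windows):

    # on the indexer: is splunkd actually bound to 9997?
    netstat -an | grep 9997
    # on the indexer: is the listening port configured?
    $SPLUNK_HOME/bin/splunk btool inputs list splunktcp --debug
    # on the search head: where are outputs pointed?
    $SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug

SYNs with no SYN-ACK coming back usually points at a host or network firewall between the two machines rather than at Splunk itself.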
When going to CMC -> Forwarders -> Forwarders: deployment, I see that we have 19k+ forwarders, which is completely inaccurate. We have more like 900. It shows 18k+ as missing, and the list has instances that were decommissioned years ago. I thought I could fix this by telling it to rebuild the forwarder assets via the button under CMC -> Forwarders -> Forwarder monitor setup, but when I click on it, it processes for about a minute and then nothing changes. The description makes me think it is supposed to clear out the sim_forwarder_assets.csv lookup and rebuild it using only data it sees within the time frame I selected (24 hours). If I open up the lookup, all the entries it had previously are still there. Am I misunderstanding how this works, or is something broken?
I am sure it's fine, but this TA seems a little off (the logo and the 'Built By'). Given who Wiz are, what they do, and their recent high-profile work disrupting some bad guys, I am keen for others' views on this. "Built by product-integrations product-integrations" is strange, and the logo seems pixelated. Our team has recently had some "luck" in getting things vetted that really shouldn't have been (and yes, we reported it), so simply saying "it's passed App Vetting" isn't enough for us.
If you use external source control (such as GitLab), this is fairly easy. You can pull down the repo and then parse through all of the .json files. Any time a playbook calls another, it is added to the JSON with the key "playbookName". My quick and dirty PowerShell, once I had cloned the repo, was:

Get-ChildItem -Path "[repo path]\*.json" -Recurse |
    Select-String -Pattern '"playbookName"' -AllMatches |
    Select-Object -Property Line |
    Export-Csv -Path [csv path].csv

It's a little tougher if you don't have that easily accessible externally. My best guess (which I haven't personally tested) would be to use the API to loop through every playbook by id, then parse your way down to that playbookName and count it. Make sure you don't stop at the first match, as there's a chance more than one subplaybook is called.
Hi @tommasoscarpa1, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Hi @richgalloway, your suggestion to use return helped me make the query work. I had to make some adjustments too:

..main search
| where like(onerowevent, [| inputlookup blabla.csv
    | <whatever_condition_to_make_onecompare_field>
    | eval onecompare = "\"%" . onecompare . "%\""
    | return $onecompare])

The only thing is, when I'm using '| return $onecompare', I'm missing one row from the output, even if I test the subsearch separately. I will figure out what is making the 'return' clause skip the row. Regards,
My original time format in the search is:

eventID: d7d2d438-cc61-4e74-9e9a-3fd8ae96388d
eventName: StartInstances
eventSource: ec2.amazonaws.com
eventTime: 2024-01-30T05:00:27Z
eventType: AwsApiCall

I am not able to convert it using the strptime function:

| eval dt_year_epoc = strptime(eventTime, "%Y-%m-%dThh:mm:ssZ")
| eval dt_day = strftime(dt_year_epoc, "%d")

Nothing comes up in dt_day.
How can I take the eventName, instanceId, and eventTime into a pivot table from the search below?

index=aws_cloudtrail sourcetype="aws:cloudtrail" (eventName="StartInstances" OR eventName="StopInstances" OR eventName="StartDBInstance" OR eventName="StopDBInstance" OR eventName="StartDBCluster" OR eventName="StopDBCluster") AND (userIdentity.type="AssumedRole" AND userIdentity.sessionContext.sessionIssuer.userName="*sched*")
| spath "requestParameters.instancesSet.items{}.instanceId"
| search "requestParameters.instancesSet.items{}.instanceId"="i-0486ba14134c4355b"
| spath "responseElements.instancesSet.items{}.instanceId"
| spath "recipientAccountId"

Sample event:

awsRegion: us-east-1
eventCategory: Management
eventID: 3a80a688-fa82-4950-b823-69ffc3283862
eventName: StartInstances
eventSource: ec2.amazonaws.com
eventTime: 2024-01-30T11:00:38Z
eventType: AwsApiCall
eventVersion: 1.09
managementEvent: true
readOnly: false
recipientAccountId: XXXXXXXXXXX
requestID: b404437a-ee56-4531-842e-1b10c01f01d3
requestParameters: {
  instancesSet: {
    items: [
      {
        instanceId: i-0486ba14134c4355b
      }
    ]
  }
}
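If a stats table is acceptable in place of the Pivot UI, a sketch along these lines (reusing the field names from the search above):

    ... base search from above ...
    | rename "requestParameters.instancesSet.items{}.instanceId" as instanceId
    | stats latest(eventTime) as eventTime by eventName, instanceId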
So they are available in the search results, since the where clause is working. But if I don't want to display them, does that mean I cannot include them in the email either?