All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

The appendcols command does not correlate rows between the two result sets by field values, so while the initial search will have been ordered by UserAgent (then _time), the subsearch (from the appendcols) will probably still be in (reverse) _time order. Not only that: because of the extra filter on the subsearch, it may return fewer events than the outer search, so the rows won't line up one-for-one.
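One way to avoid the row-alignment problem entirely is to compute both counts in a single search, flagging the "filtered" events with an eval instead of running a second search. A rough sketch (the angle-bracket placeholders stand in for the actual base search and filter condition):

```spl
<base search>
| eval success=if(<extra filter condition>, 1, 0)
| stats count AS Attempts, sum(success) AS Successes BY UserAgent, _time
| eval Ratio=round(100*Successes/Attempts, 2)
```

Because everything is keyed by UserAgent and _time in one stats call, there is no reliance on row order at all.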
Hi, We have a Splunk Cloud instance, and a few of our systems don't have an out-of-the-box add-on, so we decided to try to get the data via API. However, our instance doesn't have any API data inputs, nor can we find any way to create an input of our own. We tried to install the Add-on Builder app, but the installation fails every time. Is there any way to create our own add-on, or a way to get Splunk to pull data via API?
We have a few instances hosted in AWS that are extremely underutilized (single-digit average CPU% for the last 3 months). The AWS Compute Optimizer has recommended the following changes to the instances:

Current Instance Type | Recommended Instance Type
c4.4xlarge | r6i.xlarge
c4.8xlarge | r6i.2xlarge and r6i.xlarge
c5.2xlarge | r6i.large, r6i.xlarge, t3.medium, t3.small
c5.4xlarge | r6i.2xlarge
c5.9xlarge | r6i.4xlarge
c5.xlarge | r6i.large
t3.medium | t3.large
t3.micro | t3.medium

We noticed that most of the recommendations are about replacing 'compute-optimized' instances with new-generation 'memory-optimized' instances, which also reduces the number of CPU cores.

Question: can we consider replacing the instances based on these recommendations?
This Splunk search is not showing any results.

index=os OR index=linux sourcetype=vmstat OR source=iostat
    [| input lookup SEI-build_server_lookup.csv where platform=eid_rhel6 AND where NOT (role-code-sonar)
     | fields host
     | format ]
| rex field=host (?<host>\w+)?\..+"
| timechart avg(avgWaitMillis)
| eval cores=4
| eval loadAvg1mipercore=loadAvg1mi/cores
| stats avg(loadAvg1mipercore) as load by host

Please help to correct my search.
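A few likely problems stand out: `inputlookup` is one word, the `where` clause is malformed (values need quotes, and `where` appears twice), the `rex` is missing its opening quote, and `timechart avg(avgWaitMillis)` discards every field the later evals depend on (loadAvg1mi no longer exists after it). A hedged corrected sketch — it assumes the lookup has `platform` and `role` columns and that "role-code-sonar" meant excluding role="code-sonar", which may not match the actual intent:

```spl
index=os OR index=linux sourcetype=vmstat OR source=iostat
    [| inputlookup SEI-build_server_lookup.csv
     | where platform="eid_rhel6" AND role!="code-sonar"
     | fields host
     | format ]
| rex field=host "(?<host>\w+)\..+"
| eval cores=4
| eval loadAvg1mipercore=loadAvg1mi/cores
| stats avg(loadAvg1mipercore) AS load BY host
```

If the avgWaitMillis timechart is also wanted, it would need to be a separate search, since timechart drops all other fields.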
Any news? How did you solve that issue, if ever?
I got the results with the query below:

index=_audit action="search" search="*" NOT user="splunk-system-user" savedsearch_name="" NOT search="\'|history*" NOT search="\'typeahead*"
| rex "index=(?P<myIndex>\w+)\s+\w+="
| stats count by myIndex
Hi, I am looking to parse nested JSON events; basically I need to break them into multiple events. I am trying something like this, but it just duplicates the same record across multiple lines.

| spath path=list.entry{}.fields output=items
| mvexpand items

I am looking to get all key/value pairs as a single event under "fields".

Sample record:

{
  "total": 64,
  "list": {
    "entry": [
      {
        "recordId": 7,
        "created": 1682416024092,
        "id": "e70dbd86-53cf-4782-aa84-cf28cde16c86",
        "fields": {
          "NumDevRes001": 11111,
          "NumBARes001": 3,
          "lastUpdated": 1695960000000,
          "engStartDate": 1538452800000,
          "RelSupport001": 0,
          "UnitTest001": 0,
          "Engaged": 1,
          "ProdGroup001": 1,
          "QEResSGP001": 0.5,
          "QEResTOR001": 1,
          "QEResLoc001": 3,
          "SITBugs001": 31,
          "QEResIND001": 5,
          "QEResLoc003": 3,
          "QEResLoc002": 3,
          "Project": "Registration Employee Directory Services",
          "AutoTestCount001": 1657,
          "AppKey001": "ABC"
        },
        "ownedBy": "TEST1"
      },
      {
        "recordId": 8,
        "createdBy": "TEST2",
        "created": 1682416747947,
        "id": "91e88ae6-0b64-48fc-b8ed-4fcfa399aa3e",
        "fields": {
          "NumDevRes001": 22222,
          "NumBARes001": 3,
          "lastUpdated": 1695960000000,
          "engStartDate": 1538452800000,
          "RelSupport001": 0,
          "UnitTest001": 0,
          "Engaged": 1,
          "ProdGroup001": 1,
          "QEResSGP001": 0.5,
          "QEResTOR001": 1,
          "QEResLoc001": 3,
          "SITBugs001": 31,
          "QEResIND001": 5,
          "QEResLoc003": 3,
          "QEResLoc002": 3,
          "Project": "Registration Employee Directory Services",
          "AutoTestCount001": 1657,
          "AppKey001": "ABC"
        },
        "ownedBy": "TEST2"
      }
    ]
  }
}
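A common pattern for this kind of structure is to expand on the whole entry object (not just its fields member) and then re-parse each expanded value with a second spath. A sketch, assuming the raw event contains the JSON shown above:

```spl
| spath path=list.entry{} output=entry
| mvexpand entry
| spath input=entry
```

After the mvexpand, each entry becomes its own event, and the final spath extracts its keys as fields such as fields.NumDevRes001, fields.Project, ownedBy, etc.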
Hello, I'm trying to calculate the ratio of two fields but I'm getting wrong results. If I calculate each one of them separately I get the right results, but together something is wrong.

index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" AND "Request.actionUrl"!="*staging*"
| eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE", match(UserAgent, ".*android.*"), "Android FE", 1=1, "Web FE")
| dedup UserAgent, _time
| stats count as AttemptsFE by UserAgent _time
| appendcols
    [search index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" AND "Request.actionUrl"!="*staging*" "Request.status" IN (201, 207) NOT "Request.data.twoFactor.otp.expiresInMs"="*"
    | eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE", match(UserAgent, ".*android.*"), "Android FE", 1=1, "Web FE")
    | dedup UserAgent, _time
    | streamstats count as SuccessFE by UserAgent _time]
| eval SuccessRatioFE = round((SuccessFE/AttemptsFE)*100, 2)
| eval SuccessRatioFE = (SuccessFE/AttemptsFE)*100
| timechart bins=100 avg(SuccessRatioFE) as SuccessRatioFE BY UserAgent
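Since both halves run over the same base events, one way to sidestep appendcols entirely is to flag successes with an eval and aggregate once. A hedged sketch — it drops the dedup (re-add it if needed) and assumes the success condition is exactly the extra filter from the subsearch:

```spl
index=clientlogs sourcetype=clientlogs Categories="*networkLog*" "Request.url"="*v3/auth*" Request.url!=*twofactor* "Request.actionUrl"!="*dev*" "Request.actionUrl"!="*staging*"
| eval UserAgent = case(match(UserAgent, ".*ios.*"), "iOS FE", match(UserAgent, ".*android.*"), "Android FE", 1=1, "Web FE")
| eval success = if(in('Request.status', 201, 207) AND isnull('Request.data.twoFactor.otp.expiresInMs'), 1, 0)
| bin _time bins=100
| stats count AS AttemptsFE, sum(success) AS SuccessFE BY _time, UserAgent
| eval SuccessRatioFE = round(100 * SuccessFE / AttemptsFE, 2)
| xyseries _time UserAgent SuccessRatioFE
```

Because attempts and successes are counted from the same rows, the ratio can never be skewed by misaligned rows the way an appendcols result can.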
I'm planning to set up an integration between Splunk and the ESET Endpoint Security cloud platform, but I'm facing the following issue: the syslog-ng server is receiving unreadable/encrypted logs from ESET Endpoint Security, so the logs appear on the HF server like this:

^A^B ^L 7 ^] ^W ^^ ^Y ^X # ^W (^D^C^E^C^F^C^H^G^H^H^H ^H 2

I think I need to decrypt the logs when they are received by syslog-ng, because Splunk can't handle the decryption itself. I need help with how to decrypt the logs in syslog-ng.
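Those control-character bytes look like a TLS handshake arriving on a plain-TCP listener: if ESET is sending syslog over TLS, syslog-ng has to terminate the TLS itself rather than "decrypt" stored logs. A hedged sketch of a TLS source (the port, file paths, and certificate names are placeholders to adapt):

```conf
# Hypothetical /etc/syslog-ng/conf.d/eset-tls.conf
source s_eset_tls {
    network(
        port(6514)                # adjust to whatever port ESET sends to
        transport("tls")
        tls(
            key-file("/etc/syslog-ng/tls/server.key")
            cert-file("/etc/syslog-ng/tls/server.crt")
            ca-dir("/etc/syslog-ng/tls/ca.d")
            peer-verify(optional-untrusted)   # tighten once it works
        )
    );
};
```

With TLS terminated at the source, the log destinations then carry plaintext that the HF can parse normally.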
It is not clear what your criteria are for determining what an anomaly is. Also, from your example, you don't need to combine the fields; you could just do something like this:

| stats sum(error) as count by Svc Cust Evnt
| sort -count
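If a workable definition of "anomaly" is, say, a count more than two standard deviations above the mean, that could be layered on top. A sketch under that assumption:

```spl
| stats sum(error) AS count BY Svc Cust Evnt
| eventstats avg(count) AS mean, stdev(count) AS sd
| where count > mean + 2*sd
```

The eventstats could also take a BY clause (e.g. BY Svc) if outliers should be judged per service rather than across the whole set.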
Hi @MattHatter Did you find a solution to this? We had exactly the same problem and we managed to get it resolved by editing the lookup_edit file (under Settings - User Interface - Views) as follows:

<view template="lookup_editor:/templates/generic.html" type="html" isDashboard="true" isVisible="true">
  <label>Lookup edit</label>
</view>
If this is not working, please share your exact dashboard source as something might have been lost in converting the answer to your solution.
Trying to find anomalies for events. I have multiple services and multiple customers. I have an error "bucket" that is capturing events for failures, exceeded, notified, etc. I'm looking for a way to identify when there are anomalies or outliers for each of the services/customers. I have combined (eval) service, customer, and the error, and am just counting the number of error events generated by each service/customer. So for example, services svcA, svcB, svcC and customers custA, custB, custC would give:

svcA-custA-failures 10
svcA-custA-exceeded 5
svcA-custA-notified 25
svcB-custA-failures 11
svcB-custA-exceeded 9
svcB-custA-notified 33
svcB-custB-failures 3
svcA-custB-exceeded 7
svcA-custB-notified 22
svcA-custC-exceeded 8
svcA-custC-failures 3
svcA-custC-notified 267
svcC-custC-exceeded 1
svcC-custC-failures 4
svcC-custB-notified 145
svcC-custA-notified 17

Something along the lines of this:

| eval Svc-Cust-Evnt=Svc."-".Cust."-".Evnt
| stats sum(error) by Svc-Cust-Evnt
| rename sum(error) as count
| sort -count
You can do those “indexer stuff” with that forwarder licence. Only thing what is missing is indexing. You need to open only management access. Normally this is port 8089/tcp. Then if/when you want t... See more...
You can do that "indexer stuff" with the forwarder licence; the only thing missing is indexing. You only need to open management access, normally port 8089/tcp. Then, if/when you want to monitor those instances with the MC, you also need to open that same port MC -> LC, add those instances as indexers in the MC, and create some groups of your own for them, etc.
Thank you for your reply, I was looking into the SaaS doc and my future set-up will be on SaaS. Best Regards
@richgalloway As I'm trying to exclude fields like user_watchlist & ip_options under the vpn index, etc., can you please share the props.conf and transforms.conf to exclude the above fields by creating a custom app? Thanks
Hi @jwalrath1 , let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all t... See more...
Hi @jwalrath1 , let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
That is really interesting and you are right - I tried these variants:

C:\Windows\system32\cmd.exe /d /c C:\ProgramData\Symantec\Symantec Endpoint Protection\14.3.8289.5000.105\Data\Definitions\WebExtDefs\20230830.063\webextbridge.exe*
C:\Windows\system32\cmd.exe /d /c C:\ProgramData\Symantec\Symantec Endpoint Protection\14.3.8*
C:\Windows\system32\cmd.exe /d /c C:\ProgramData\Symantec\Symantec Endpoint Protection\*\webextbridge.exe*

The top two do not work, but the last does. If I make the second one end in 14.3.* then it DOES work. Not sure what's going on there.
You would either have to include that subsearch part as an OR in the outer search and munge the data so you could join the data sets with stats somehow, or create a lookup through a saved search run on a regular basis (if it changes) and use the lookup to filter rather than the subsearch; then you'd have everything you need.
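For illustration, the "OR plus stats" approach might look roughly like this. This is only a sketch: it assumes denyListAction and reason are derived from message as in the original search before the stats, and the eval that picks the right mac per event depends on the data:

```spl
(index=aruba sourcetype="aruba:stm" ("*Denylist add*" OR "*Denylist del*"))
OR (index=main host=thestor Username="*adgunn*")
| eval mac=if(index=="main", replace(Client_Mac,"-",":"), substr(mvindex(split(message," "),4),1,17))
| stats values(denyListAction) AS denyListAction, values(reason) AS reason, values(Username) AS Username BY mac
| where isnotnull(Username) AND isnotnull(denyListAction)
```

Because stats groups both data sets by the shared mac value, the Username from one set lands on the same row as the denylist fields from the other, so it can appear in the final table.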
I have a search and subsearch that is working as required, but there is a field in the subsearch that I want to display in the final table output, not use as a search filter.

index=aruba sourcetype="aruba:stm" "*Denylist add*" OR "*Denylist del*"
| eval stuff=split(message," ")
| eval mac=mvindex(stuff,4)
| eval mac=substr(mac,1,17)
| eval denyListAction=mvindex(stuff,3)
| eval denyListAction=replace(denyListAction,":","")
| eval reason=mvindex(stuff,5,6)
| search mac="*:*"
    [ search index=main host=thestor Username="*adgunn*"
    | dedup Client_Mac
    | eval Client_Mac = "*" . replace(Client_Mac,"-",":") . "*"
    | rename Client_Mac AS mac
    | fields mac ]
| dedup mac,denyListAction,reason
| table _time,mac,denyListAction,reason

What I want is for the value held in the field Username to be included in the table command of the outer search. How do I pass it from the subsearch so it can be used in the table command and not as part of the search? Thanks.