All Posts

This is just a fun optimization question; the benefit may in fact be very small. My Splunk searches are already optimized: they join 24 million events across 3 sourcetypes in about 40 seconds over a 30-day window by using the stats method for joining data - https://conf.splunk.com/files/2019/slides/FNC2751.pdf

However, before I do the join operations with stats, I first have to use stats latest() to keep only the latest event per identifier. That is because all my sourcetypes contain historical data but have unique identifiers. Not all sourcetypes have data every single day, so I have to look back at least 30 days to get a reasonably complete picture. Here's an example of the stats latest() step:

<initial search>
| fields _time, xxx, xxx, <pick your required fields>
| eval coalesced_primary_key=coalesce(sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary)
| stats latest(*) AS * by coalesced_primary_key

The index holds 24,000,000 events before the implicit search (the first line) runs. After the implicit search, but before stats latest(), I have 13,000,000 events. After stats latest(), the total drops to 750,000 events.

What if the stats latest() pipe could be skipped altogether by somehow making the implicit search (the first line) return only the latest events - in other words, cutting the event total from 24,000,000 to 750,000 directly? That could make the query much faster, if it is possible. I already have the unique primary key for each sourcetype, so the idea would be something like latest(sourcetype_1_primary) but applied in the first-line implicit search. I'm afraid my Splunk knowledge doesn't get me there, and googling doesn't seem to pull up anything.
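For comparison, a minimal alternative sketch using dedup, which keeps one whole event per key (the first one seen, i.e. the most recent with the default time-descending search order). Note this is a swapped-in technique, not the same as stats latest(*), which takes the latest value of each field independently; it also still reads all 13,000,000 post-filter events, so it is something to benchmark rather than a guaranteed speed-up:

<initial search>
| fields _time, xxx, xxx, <pick your required fields>
| eval coalesced_primary_key=coalesce(sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary)
| dedup coalesced_primary_key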
I am seeing the following alert in the Search and Reporting app and also within the InfoSec App for Splunk:

[idx-1,idx-2,sh-2] Could not load lookup=LOOKUP-threatprotect-severity

I am not sure how to go about troubleshooting this further. Thx.
@isoutamo This is my settings.json:

{
  "liveServer.settings.AdvanceCustomBrowserCmdLine": "chrome",
  "editor.fontSize": 24,
  "workbench.editor.enablePreview": false,
  "splunk.commands.splunkRestUrl": "https://<SERVER_NAME>:8089",
  "splunk.commands.token": "<TOKEN>",
  "splunk.reports.SplunkSearchHead": "https://<SERVER_NAME>:8080",
  "notebook.lineNumbers": "on",
  "terminal.integrated.profiles.windows": {
    "PowerShell": { "source": "PowerShell", "icon": "terminal-powershell" },
    "Command Prompt": {
      "path": [ "${env:windir}\\Sysnative\\cmd.exe", "${env:windir}\\System32\\cmd.exe" ],
      "args": [],
      "icon": "terminal-cmd"
    },
    "Git Bash": { "source": "Git Bash" },
    "Windows PowerShell": { "path": "C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" }
  },
  "terminal.integrated.defaultProfile.windows": "Git Bash",
  "files.exclude": { "**/.git": false },
  "workbench.colorTheme": "Visual Studio Dark",
  "workbench.iconTheme": "vscode-icons",
  "liveServer.settings.donotShowInfoMsg": true,
  "workbench.commandPalette.history": 500,
  "settingsSync.ignoredSettings": []
}

I ran lsof -i | grep 8089 on the Splunk server and it's listening:

lsof -i | grep 8089
splunkd 62692 splunk 29u IPv4 581627143 0t0 TCP <SERVER_NAME>:59190-><SERVER_NAME>:8089 (ESTABLISHED)
java 66146 splunk 84u IPv4 927511885 0t0 TCP localhost:43216->localhost:8089 (ESTABLISHED)
splunkd 86761 splunk 4u IPv4 317159394 0t0 TCP *:8089 (LISTEN)
splunkd 86761 splunk 151u IPv4 927515713 0t0 TCP localhost:8089->localhost:43216 (ESTABLISHED)

I ran netstat -ano | find /i "8089":

TCP 0.0.0.0:8089 0.0.0.0:0 LISTENING 6080

Then I ran my splnb file in VSC and reran the netstat command:

TCP 0.0.0.0:8089 0.0.0.0:0 LISTENING 6080
TCP 10.37.112.133:29160 10.100.47.105:8089 TIME_WAIT 0

Now I know an attempt was made. I started a Wireshark trace and reran my splnb file. The TLS handshake, certs, etc. seem to exchange without any issue. However, once my laptop sends application data, the Splunk server responds with an "Encrypted Alert". My laptop responds to the "Encrypted Alert" with one of its own, then a 4-way graceful disconnect. How do I find out on the Splunk server what caused it to send an Encrypted Alert?

My SPL is:

index=_internal | stats count by component

Thanks for your help. It is late here. Enjoy your weekend and God bless,
Genesius
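A hedged way to look for the server-side cause of an alert like that is to check splunkd's own logs around the time of the failed request. The sourcetype and log_level fields below are standard in _internal; the keyword list is only a guess at what an SSL/TLS failure might mention:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) (SSL OR TLS OR certificate OR handshake)
| table _time, component, log_level, _raw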
Hi All, I have a SOAP request and response being ingested into Splunk under an index. There are multiple API calls available under the particular index="wireless_retail". How can I get the list of all API calls under this index whose RESPONSETIME is greater than 30 seconds? My sample SOAP request and response in Splunk is:

<getOrderServiceResponse xmlns=>
  <Case>90476491</Case>
  <SalesOrderId>8811662</SalesOrderId>
  <CustType>GW CONSUMER</CustType>
  <CustNodeId>4000593888</CustNodeId>
  <AccountId>4001293845</AccountId>
  <ServiceName>4372551943</ServiceName>
  <ServiceId>4000996500</ServiceId>
  <BillCyclePeriod>11/07/2023 - 06/19/2024</BillCyclePeriod>
  <NextBillDueDate>06/03/2024</NextBillDueDate>
  <TabAcctBalance/>
  <DeviceUnitPrice>0.00</DeviceUnitPrice>
  <DepositAcctBalance/>
  <tabAmount>0.00</tabAmount>
  <tabMonthlyFee>0.00</tabMonthlyFee>
  <tabDepletionRate>0.00</tabDepletionRate>
  <deviceOutrightCost>0.00</deviceOutrightCost>
  <deviceOutrightPayment>0.00</deviceOutrightPayment>
  <ConnectionFeeDetails>
    <connectionFee>45.00</connectionFee>
    <connectionFeePromoCode>CF9 Connection Fee Promo</connectionFeePromoCode>
    <connectionFeePromoValue>45.00</connectionFeePromoValue>
    <netConnectionFee>0.00</netConnectionFee>
  </ConnectionFeeDetails>
</getOrderServiceResponse>
</soapenv:Body>
</soapenv:Envelope>", RETRYNO="0",, OPERATION="getOrderService", METHOD="SOAP", CONNECTORID="48169c3e-9d28-4b8f-9b9f-14ca83299cca", CONNECTORNAME="SingleView", CONNECTORTYPE="Application", CONNECTORSUBTYPE="SOAP", STARTTIME="1715367648945", ENDTIME="1715367688620", RESPONSETIME="39675",

So my sample API request/response pair is getOrderServiceRequest and getOrderServiceResponse. Like this, there are multiple API calls available in the index. I want all the API calls along with their RESPONSETIME in a graph, to see which ones are taking more than 30 seconds. Could you please help?
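A minimal sketch of one way to chart this, assuming the OPERATION and RESPONSETIME key/value pairs shown in the sample appear in the raw event text, and that RESPONSETIME is in milliseconds (39675 ms matches ENDTIME minus STARTTIME above):

index="wireless_retail"
| rex field=_raw "OPERATION=\"(?<operation>[^\"]+)\""
| rex field=_raw "RESPONSETIME=\"(?<response_ms>\d+)\""
| eval response_ms=tonumber(response_ms)
| where response_ms > 30000
| stats count AS slow_calls max(response_ms) AS max_response_ms by operation
| sort - max_response_ms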
@yuanliu has many great points, but let me add one more thing - this way of ingesting data is really very "Splunk un-friendly". The nested JSON payload is - for all intents and purposes - just a text blob to Splunk during automatic event processing. True, you can extract the message field using KV_MODE=json (or even have it as an indexed field with INDEXED_EXTRACTIONS=json, but that would be a horrible idea), but you can't make Splunk parse that field further automatically. If you need to do anything more with it, you have to call spath explicitly to parse its contents.

This matters because with auto-extracted JSON fields you can simply search for key=value pairs, and the search will be relatively efficient: Splunk first looks for the values among the indexed terms and only then checks whether the event parses properly so that the key matches the value. But if your whole payload sits inside the message field, you don't have any fields, so Splunk cannot search by field value; it first has to parse every event in the given time range only to match some of them against the condition. That is highly inefficient. This "envelope" is a very, very bad thing from Splunk's point of view.
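A tiny illustration of the difference, with hypothetical index, sourcetype, and payload names: KV_MODE=json gets you the message field automatically, but anything nested inside it has to be pulled out explicitly at search time, for example:

index=my_index sourcetype=my_json
| spath input=message path=payload.user output=user
| where user="alice"

If the payload were ingested as plain JSON instead, the equivalent filter could go straight into the initial search as a key=value term and benefit from the indexed-term matching described above.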
I suppose the problem lies elsewhere. Your point is valid - the "regex" is not very well written, but those asterisks are actually a bit superfluous and shouldn't break anything. From the original question (which was a bit of a "stream of consciousness", without paragraph breaks or spaces after full stops) I suppose that the stats values() produces multivalue fields, because a single correlationId can apply to several different files, each of which can have different results, and so on. But that's just my suspicion.
It might be doable with the transaction command, but that's usually not a good idea (transaction is a relatively "heavy" command and has its limitations). I'd go with streamstats and its reset_before, reset_after and time_window options. (I can't give you a ready-made answer at the moment since I'm away from my Splunk environment, but that's the way I'd try it.)
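A rough sketch of that direction, not a ready-made answer - the index, sourcetype, field names and the reset condition here are all stand-ins. The condition passed to reset_before restarts the running count, and time_window keeps only events within the given span:

index=my_index sourcetype=my_logs
| streamstats time_window=10m reset_before="(status_code>=500)" count AS events_in_window by session_id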
OK. What problem are you actually trying to solve here? And I'm not asking for the answer "I want to delete events". Why do you want to delete those events? What are those events? How did they get into your indexes? Why are you so eager to delete them (mind you, they don't actually get deleted from disk - they just get marked as unreadable but remain present in the index files) instead of just letting them roll to frozen with time?
It's a relatively old thread, but I'll add my three cents. Assume we're pondering deleting events from an index on which an accelerated datamodel summary is built (without the acceleration summary the answer is obvious, because a datamodel search is simply translated into a raw event search and executed against the indexed events). As long as the deleted data falls within the backfill range, I'd expect the summarization search to adjust the summary accordingly on its next scheduled run. If the deleted data falls within the summary range but outside the backfill range, I'd expect the summary to stay untouched, because there is no mechanism to update it.
Did you just move your lookup or did you adjust field names as well?
Hello @Pastea, You can try this https://community.splunk.com/t5/All-Apps-and-Add-ons/Addon-Builder-Configuration-Pages-Don-t-Work-on-a-Search-Head/m-p/679681/thread-id/80304
Moving the lookup after the chart fetches nothing.
Hello @cameronjust ,

You can use a setting in server.conf called conf_replication_include to force replication of the file containing the accounts:
https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Serverconf

After creating an account, the account file is created in the local folder of your app. Example:

[shclustering]
conf_replication_include.<app_account_file_without_extension> = true
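As a purely hypothetical concrete example, assuming the add-on stores its accounts in a file named my_addon_account.conf under the app's local folder, the stanza would look like:

[shclustering]
conf_replication_include.my_addon_account = true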
I have the latest(?) Splunk VSCode extension, and my Splunk instance is on my laptop too. If you are trying to use a remote instance, you must use the correct node name and port in settings.json instead of localhost. I'm not sure whether I have ever run this against other Splunk instances, or only against my own dev/test instance on the same node that runs VSCode.
Hi @czql5v

So, what I mean by "it may be elsewhere" is, say for example, a software engineer develops an authentication application; they may well log data in the log files to show why a user's login is failing, alongside other events. Now, Microsoft logs a lot of events - but do they actually log why? Yes, for some: for example, EventID 4625 is a bad password, we know that, and we can look for that. Since, as you said, it's not a bad password, this is really a Microsoft-related issue, not a Splunk one. Splunk is designed to ingest log files, as you have done via AD, and we search those logs to find information; but if that data, EventID or information is not in the log file, then we can't search for it. Maybe look at some of the Microsoft forums and post a question there - they may be able to help debug the issue, or even tell you which EventID corresponds to it, if there is such an EventID.
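As a small illustration of "we can look for that", a sketch of a search for that bad-password event; the index and field names depend on how the Windows add-on is configured, so treat them as assumptions:

index=wineventlog EventCode=4625 user="<username>"
| stats count by host, Workstation_Name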
Hi @David.Teng, Thanks for asking your question on the Community and then sharing the solution! Glad you were able to figure it out.
Hi Deepakc, In the details of the search in Splunk I can see that there is a logon account, which I search on - also a source workstation (at least 3 different ones) with EventCode=4776, and 3 different hosts, which are the Domain Controllers of the domain. I assume the hosts are where the user is attempting to validate credentials. Does this mean that the user is attempting to validate from different workstations and the validation goes to the nearest DC in the domain? So I assume the source workstation is where the user is attempting to log in from? Regards.
Hi Deepakc, The user is definitely not typing the wrong password. What happens is that his account gets locked out when he logs back in after having been away from his machine to get a cup of tea or something similar. When you say "if it's not in the event data", what do you mean by that? Where would I see event data? I hope the above helps. Regards.
Thanks @isoutamo

I made your suggested changes, including creating a new token. Unfortunately, it didn't work:

WARN: call not properly authenticated

There is zero usable info on the Internet about this error. Plus, when I run a Wireshark capture, the token and other info indicate the authentication is not leaving my PC. The issue appears to be within VSCode and the Splunk extension.

Thanks and God bless,
Genesius
Try doing your lookup after the chart:

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| chart count over client by apiName
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region