All Posts

Hello @bowesmana, thanks for replying to my post. Regarding your last suggestion, if I understood it correctly, I can assign a priority value in the search string itself? So far, what I've read is that identities and assets are added to Splunk via lookups, from which the information about their priority is pulled. If I got your suggestion about assigning priorities in the searches themselves right, could you please provide an example? I would really appreciate it! Cheers, Splunky diamond
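For readers landing on this thread: the pattern being asked about usually takes the shape of an eval that sets priority inline, instead of relying on the asset/identity lookups. A minimal sketch; the index, field names, and conditions below are illustrative, not taken from the thread:

    index=your_index sourcetype=your_sourcetype
    | eval priority=case(
        cidrmatch("10.0.0.0/8", src_ip), "critical",
        user="admin", "high",
        true(), "medium")
    | table _time src_ip user priority

The case() function returns the first matching branch, with true() as the catch-all default.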
Hello @gcusello, thanks for replying to my post! I am sorry, but I don't think I quite understand what you are suggesting. Just FYI, here are all the available configurations under [Configure > All Configurations]: I checked multiple settings, but I don't think any of them relate to the specific dashboard whose settings I am looking to change. Cheers, splunky_diamond.
Hi @Cerum, you didn't mention IP allow list checks, so it might be worth checking your cloud IP allow list config. This has caught me out in the past: if all your apps (I'm assuming SaaS types) send to HEC in Splunk Cloud, you may need to add them to the IP allow list for the relevant Splunk Cloud feature (HEC access for ingestion). That is, if you are even using IP allow lists; if you haven't configured any, all the features are accessible and this is not the issue. https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Config/ConfigureIPAllowList
Hi @splunky_diamond, did you try going to [Configure > Incident Review]? In that dashboard it's certainly possible to change the time picker of the Incident Review dashboard; I'm not sure the same applies to Security Posture. Ciao. Giuseppe
Try starting with something like this:

    | streamstats values(aggregator_status) as previous_aggregator_status by aggregator window=1 current=f global=f
    | eval changetime=if((aggregator_status="Up" and previous_aggregator_status="Error") or (aggregator_status="Error" and previous_aggregator_status="Up"), _time, null())
    | where isnotnull(changetime)
    | streamstats current=t global=f window=2 range(_time) as time_diff2 by aggregator
    | where aggregator_status="Error"
Hello Splunkers! I want to change the time picker of this dashboard in Enterprise Security so it shows the count of notables over the last 12 hours instead of the last 24 hours. I tried changing the time-related values in the source code via the GUI, but it does not work: for some reason the changes are not being saved, even though I am hitting the save button. Is there a way to add a time picker to this dashboard, so that we can select a time period of interest at any time and update the dashboard instantly? Thanks in advance for taking the time to read and reply to my post.
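For reference, on a dashboard you own (built-in ES dashboards generally need to be cloned before they can be edited), a time picker in Simple XML follows this generic pattern. The token name, defaults, and placeholder query below are illustrative:

    <form>
      <label>Cloned Security Posture</label>
      <fieldset submitButton="false">
        <input type="time" token="time_tok">
          <label>Time Range</label>
          <default>
            <earliest>-12h@h</earliest>
            <latest>now</latest>
          </default>
        </input>
      </fieldset>
      <row>
        <panel>
          <single>
            <search>
              <!-- placeholder search; each panel reuses the token -->
              <query>`notable` | stats count</query>
              <earliest>$time_tok.earliest$</earliest>
              <latest>$time_tok.latest$</latest>
            </search>
          </single>
        </panel>
      </row>
    </form>

Any panel that references $time_tok.earliest$ / $time_tok.latest$ re-runs when the picker changes.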
Please share the complete search which is not working. Also, please include some representative anonymised sample events so we can see what you are dealing with.
OK. The easiest thing would indeed be to try to push to the /raw endpoint from your solution to verify whether anything is being sent at all (and to check any available logs on the sender's side if there are problems). Aaaaand did you check for the usual culprit behind "missing data": time misconfiguration? It's a fairly common issue that the data is being indexed, but at the wrong moment in time, so you're not finding it properly. (It's more obvious if data is indexed ahead of time, because then you can find it after some time if your source is constantly sending events; but if data is indexed "late", you won't find it if you intuitively search over "last 30 minutes" or so.)
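A quick way to sanity-check for this, assuming the data is reachable under some index (the index name below is a placeholder), is to compare event time with index time; the future-looking latest bound catches events stamped ahead of time:

    index=your_index earliest=-24h latest=+24h
    | eval lag_seconds = _indextime - _time
    | stats count min(lag_seconds) max(lag_seconds) by sourcetype

Large positive lag means late indexing; negative lag means future-dated events.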
This is actually a question for your Windows/AD gurus. Splunk is "just" a data processing platform. Splunk can gather data from external sources, search it, analyze, aggregate and visualize it, and so on, but interpretation of the data and of Splunk search results is up to you. You must know what the data you push into Splunk is about.
OK. And what is your problem here? You've shown us a data sample (not in its entirety, though; it seems to be cut off after a comma, and we can't know if it is, for example, well-formed XML), but we don't know what you have tried so far and what isn't working the way you'd expect. Do you have problems extracting fields? Or searching for matching data? Aggregating? Visualizing?
And check permissions. The lookup itself might be OK but you might not have permissions to use it.
No, you can't. Search just pulls data from the index. It doesn't do any inter-event comparisons and such, so you can't just get the latest event. That's what stats is for. Also remember that in a clustered environment the latest event will come from just one of the indexers, and the search command is a distributed streaming command, so it obviously gets distributed to all search peers and runs independently on each of them. How would you get the latest event from a particular index without knowing whether other peers have a more recent event? And since it's a distributed streaming command, subsequent commands which do not move the processing to the SH tier (other distributable streaming commands - most notably eval) will also get executed on all indexers taking part in the search. So no: search is for searching, stats is for aggregation (and latest() is a form of aggregation).
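As a concrete illustration of the stats approach (the index name is a placeholder):

    index=your_index
    | stats latest(_time) as latest_time latest(_raw) as latest_event
    | eval latest_time=strftime(latest_time, "%F %T")

stats runs partially on each indexer and is finalized on the search head, which is exactly what makes "latest across the whole cluster" well-defined.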
Hi @SplunkNinja, search in the lookups and in the lookup definitions for the automatic lookup named "threatprotect-severity"; probably it's missing, or some fields called by your searches are missing from the lookup definition. Ciao. Giuseppe
This is just a fun optimization question. The benefit may be very little, in fact! My Splunk searches are already optimized: joining 24 million events across 3 sourcetypes in just about 40 seconds, searching over 30 days, by using the stats method for joining data - https://conf.splunk.com/files/2019/slides/FNC2751.pdf

However, before I do all the join operations using stats, I first have to use stats latest() to ensure each event is the latest. That is because all my sourcetypes have historical data, but each record has a unique identifier. Not all sourcetypes have data every single day, so I have to look back at least 30 days to get a reasonably complete picture. Here's an example stats latest():

    <initial search>
    | fields _time, xxx, xxx, <pick your required fields>
    | eval coalesced_primary_key=coalesce(sourcetype_1_primary, sourcetype_2_primary, sourcetype_3_primary)
    | stats latest(*) AS * by coalesced_primary_key

The total events in the index before the implicit search (first line) is run are 24,000,000. After the implicit search, but before stats latest() is run, I have 13,000,000 events. After stats latest() is run, the total becomes 750,000 events.

What if the "stats latest" pipe was skipped altogether, by somehow making the implied search (first line) return only the latest events? In other words, cutting the event total from 24,000,000 to 750,000 directly? That could make the query much faster, if it is possible. I have the unique primary keys for each sourcetype already, so the idea would be using latest(sourcetype_1_primary), but in the first-line implicit search. I'm afraid my Splunk knowledge doesn't help me there, and googling doesn't seem to pull up anything.
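For context: index-level pre-aggregation along these lines is generally only available via tstats, and only when the group-by key is an indexed field or lives in an accelerated data model; even then it returns aggregated fields, not whole raw events. A purely hypothetical sketch, assuming sourcetype_1_primary were an indexed field:

    | tstats latest(_time) as _time where index=your_index by sourcetype_1_primary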
I am seeing the following alert in the Search & Reporting app and also within the InfoSec App for Splunk: [idx-1,idx-2,sh-2] Could not load lookup=LOOKUP-threatprotect-severity. I am not sure how to go about troubleshooting this further. Thx.
@isoutamo This is my settings.json:

    {
        "liveServer.settings.AdvanceCustomBrowserCmdLine": "chrome",
        "editor.fontSize": 24,
        "workbench.editor.enablePreview": false,
        "splunk.commands.splunkRestUrl": "https://<SERVER_NAME>:8089",
        "splunk.commands.token": "<TOKEN>",
        "splunk.reports.SplunkSearchHead": "https://<SERVER_NAME>:8080",
        "notebook.lineNumbers": "on",
        "terminal.integrated.profiles.windows": {
            "PowerShell": {
                "source": "PowerShell",
                "icon": "terminal-powershell"
            },
            "Command Prompt": {
                "path": [
                    "${env:windir}\\Sysnative\\cmd.exe",
                    "${env:windir}\\System32\\cmd.exe"
                ],
                "args": [],
                "icon": "terminal-cmd"
            },
            "Git Bash": {
                "source": "Git Bash"
            },
            "Windows PowerShell": {
                "path": "C:\\WINDOWS\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
            }
        },
        "terminal.integrated.defaultProfile.windows": "Git Bash",
        "files.exclude": {
            "**/.git": false
        },
        "workbench.colorTheme": "Visual Studio Dark",
        "workbench.iconTheme": "vscode-icons",
        "liveServer.settings.donotShowInfoMsg": true,
        "workbench.commandPalette.history": 500,
        "settingsSync.ignoredSettings": []
    }

I ran lsof -i | grep 8089 on the Splunk server and it's listening:

    splunkd 62692 splunk  29u IPv4 581627143 0t0 TCP <SERVER_NAME>:59190-><SERVER_NAME>:8089 (ESTABLISHED)
    java    66146 splunk  84u IPv4 927511885 0t0 TCP localhost:43216->localhost:8089 (ESTABLISHED)
    splunkd 86761 splunk   4u IPv4 317159394 0t0 TCP *:8089 (LISTEN)
    splunkd 86761 splunk 151u IPv4 927515713 0t0 TCP localhost:8089->localhost:43216 (ESTABLISHED)

I ran netstat -ano | find /i "8089":

    TCP 0.0.0.0:8089 0.0.0.0:0 LISTENING 6080

I ran my splnb file in VSC and reran the netstat command:

    TCP 0.0.0.0:8089         0.0.0.0:0          LISTENING 6080
    TCP 10.37.112.133:29160  10.100.47.105:8089 TIME_WAIT 0

Now I know an attempt was made. I started a Wireshark trace and reran my splnb file. The TLS handshake, certs, etc. seem to exchange without any issue. However, once my laptop sends application data, the Splunk server responds with an "Encrypted Alert". My laptop responds to the "Encrypted Alert" with one of its own, and then a 4-way graceful disconnect follows. How do I find out on the Splunk server what caused it to send an Encrypted Alert? My SPL is:

    index=_internal | stats count by component

Thanks for your help. It is late here. Enjoy your weekend and God bless, Genesius
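One general place worth looking: splunkd's own logs around the time of the failed request. A hedged sketch (component names vary by version, so treat these as examples) that may surface an SSL or authentication error matching the Encrypted Alert:

    index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) (component=HttpListener OR component=SSLCommon)
    | table _time component log_level _raw

The same messages are also in $SPLUNK_HOME/var/log/splunk/splunkd.log on the server itself, which helps if the REST port is the thing that's broken.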
Hi All, I have SOAP requests and responses being ingested into Splunk under an index. There are multiple API calls available under the particular index="wireless_retail". How do I get the list of all API calls under this index with RESPONSETIME > 30 sec? My sample SOAP request and response in Splunk is:

    <getOrderServiceResponse xmlns=>
      <Case>90476491</Case>
      <SalesOrderId>8811662</SalesOrderId>
      <CustType>GW CONSUMER</CustType>
      <CustNodeId>4000593888</CustNodeId>
      <AccountId>4001293845</AccountId>
      <ServiceName>4372551943</ServiceName>
      <ServiceId>4000996500</ServiceId>
      <BillCyclePeriod>11/07/2023 - 06/19/2024</BillCyclePeriod>
      <NextBillDueDate>06/03/2024</NextBillDueDate>
      <TabAcctBalance/>
      <DeviceUnitPrice>0.00</DeviceUnitPrice>
      <DepositAcctBalance/>
      <tabAmount>0.00</tabAmount>
      <tabMonthlyFee>0.00</tabMonthlyFee>
      <tabDepletionRate>0.00</tabDepletionRate>
      <deviceOutrightCost>0.00</deviceOutrightCost>
      <deviceOutrightPayment>0.00</deviceOutrightPayment>
      <ConnectionFeeDetails>
        <connectionFee>45.00</connectionFee>
        <connectionFeePromoCode>CF9 Connection Fee Promo</connectionFeePromoCode>
        <connectionFeePromoValue>45.00</connectionFeePromoValue>
        <netConnectionFee>0.00</netConnectionFee>
      </ConnectionFeeDetails>
    </getOrderServiceResponse>
    </soapenv:Body>
    </soapenv:Envelope>",
    RETRYNO="0",, OPERATION="getOrderService", METHOD="SOAP", CONNECTORID="48169c3e-9d28-4b8f-9b9f-14ca83299cca", CONNECTORNAME="SingleView", CONNECTORTYPE="Application", CONNECTORSUBTYPE="SOAP", STARTTIME="1715367648945", ENDTIME="1715367688620", RESPONSETIME="39675",

So my sample API calls are getOrderServiceRequest and getOrderServiceResponse. Like this, there are multiple API calls available in the index. I want all the API calls along with their RESPONSETIME in a graph format, to know which ones consume more than 30 seconds. Could you please help?
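One possible starting point, assuming the trailing KEY="value" pairs (OPERATION, RESPONSETIME, and so on) are picked up by Splunk's automatic field extraction; if not, a rex would be needed first. RESPONSETIME appears to be in milliseconds, so 30 seconds is 30000:

    index="wireless_retail" OPERATION=* RESPONSETIME=*
    | eval response_sec = RESPONSETIME / 1000
    | where response_sec > 30
    | timechart span=1h max(response_sec) by OPERATION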
@yuanliu has many great points, but let me add one more thing: this way of ingesting data is really very "Splunk-unfriendly". The nested JSON payload is, for all intents and purposes, just a text blob for Splunk during automatic event processing. True, you can extract the message field using KV_MODE=json (or even have it as an indexed field with INDEXED_EXTRACTIONS=json, but that would be a horrible idea), but you can't make Splunk parse that field further automatically. If you need to do anything more with it, you need to explicitly call spath to parse the contents. This is important because with auto-extracted JSON fields you can just search for key=value pairs, and the search will be relatively efficient: Splunk first searches for the values in the indexed data and only then checks whether the event parses properly so that the key matches the value. But if your whole payload sits inside the message field, you don't have any fields, so Splunk cannot search for field values; it first has to parse all events from the given time range, only to match some of them on some condition. It's highly inefficient. This "envelope" is a very, very bad thing from Splunk's point of view.
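To make the explicit parsing step concrete, a minimal sketch (the sourcetype and the payload.status field are placeholders for whatever the envelope actually carries):

    index=your_index sourcetype=your_envelope_sourcetype
    | spath input=message
    | search payload.status="error"

Note that the filtering can only happen after spath has parsed every event in the time range, which is exactly the inefficiency described above.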
I suppose the problem lies elsewhere. Your point is valid - the "regex" is not very well written - but those asterisks are actually just a bit superfluous and shouldn't break anything. From the original question (which was a bit of a "stream of consciousness", without paragraph breaks and with no spaces after full stops) I suppose that the stats values() produces multivalued fields, because a single correlationId can apply to several different files, each of which can have different results, and so on. But that's just my suspicion.
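To illustrate that suspicion, a quick check one could run (the field names are guesses based on the thread, not confirmed):

    index=your_index
    | stats values(file) as files values(result) as results by correlationId
    | where mvcount(results) > 1

Any rows returned would confirm that one correlationId carries multiple results, i.e. the multivalue behaviour described above.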
It might be doable with the transaction command, but that's usually not a good idea (transaction is a relatively "heavy" command and has its limitations). I'd go with streamstats and the reset_before, reset_after and time_window options. (I can't give you a ready-made answer at the moment, since I'm away from my Splunk environment, but that's the way I'd try.)
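For readers unfamiliar with those options, a generic illustration of the shape such a search takes; this is not a tested answer to the thread's question, and the field names and condition are made up:

    index=your_index
    | streamstats time_window=30m reset_before="(status=\"START\")" count as events_in_window by session_id

time_window bounds the aggregation to a rolling time span, while reset_before/reset_after restart the running counts when the quoted eval expression matches, which is how streamstats can approximate transaction-style grouping without its overhead.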