All Posts

Hello, interesting! I had to adapt the limit query a bit, as the source type did not exist on my side:

index IN (_cmc_summary, summary) ingest_license=* | table _time ingest_license

I tried the search, but it is not working: I either have a value in the total_license_limit_gb column or in the Daily License Usage column, but never both at the same time, so the final where clause cannot do the comparison. Here is the result sorted by the daily usage: And sorted by the limit: I tried to do the search differently and also used poolsz, but without luck. Any ideas? Thank you again for your great help.
@SOClife  If your operations depend heavily on the Investigation API's simplicity and no workaround (e.g., REST searches) is feasible within your timeline, consider sticking with v7.x until v8.x matures. There is no equivalent API available: https://docs.splunk.com/Special:SpecialLatestDoc?t=Documentation/ES/8.0.3/API/InvestigationAPIreference
Running the search returned exactly the same namespaces as my initial query. There’s no row related to the Load Balancer metrics.
Hi! No, this is not normal - which OS, Splunk version, and AME version are you on? Please create a ticket at https://datapunctum.atlassian.net/servicedesk/customer/portal/1
Simon
Hi @Leonardo1998  When you ran that search, did you get a row for your Azure Load Balancer metrics? What dimensions did it list?

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @lux209  If you want to dynamically pull in your ingest license on Splunk Cloud, you can use the following:

index IN (_cmc_summary, summary) sourcetype=splunk-entitlements | table _time ingest_license

Below is an update to the original search to include this:

index=_internal source=*license_usage.log type=RolloverSummary
| append [search index IN (_cmc_summary, summary) sourcetype=splunk-entitlements | table _time ingest_license]
| stats latest(ingest_license) as total_license_limit_gb, sum(b) as total_usage_bytes by _time
| eval total_usage_gb=round(total_usage_bytes/1024/1024/1024,3)
| rename _time as Date total_usage_gb as "Daily License Usage"
| where 'Daily License Usage' > total_license_limit_gb
Hi @livehybrid , Thanks for the suggestion. I tried running the query you suggested, but unfortunately, it didn't make any difference. The issue still persists.
  Is it normal for this script to run all the time and take up a lot of memory? Is there any way to reduce memory usage?
Hi @Leonardo1998  I have found in the past that the dimensions extracted by different namespaces can vary slightly. It might be worth running the following query to check that resource_id is available as a dimension to use in your metric searches:

| mcatalog values(_dims) WHERE index=<yourAzureMetricIndex> by namespace
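If resource_id does show up as a dimension for the load balancer namespace, a metric search along these lines should then work. This is only a sketch: the index name, metric name, and span are placeholders to adapt to your environment.

```spl
| mstats avg(_value) WHERE index=<yourAzureMetricIndex> AND metric_name="ByteCount" BY resource_id span=5m
```

If this returns nothing while the mcatalog search does list the namespace, the mismatch is likely in the metric_name filter rather than the dimension.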
Hi everyone,  In Splunk Cloud, I am trying to monitor Azure Load Balancer metrics using the Splunk Add-on for Microsoft Cloud Services. The Microsoft.Network/loadBalancers namespace has been included in the Azure Metrics input configuration, and from the _internal logs it seems that the data is being retrieved correctly. For example, in the logs from mscs_azure_metrics_collector.py, I see the following entry, which suggests that the metrics are being collected:

2025-04-01 07:34:06,096 +0000 log_level=INFO, pid=3442478, tid=ThreadPoolExecutor-0_4, file=mscs_azure_metrics_collector.py, func_name=_index_resource_metrics, code_line_no=526 | Chunked metrics timespan: 2025-04-01T06:28:04Z/2025-04-01T06:33:06Z for resource /subscriptions/<subscription-id>/resourceGroups/<resource-group-id>/providers/Microsoft.Network/loadBalancers/<load-balancer-name>

However, when I try to find these metrics in Splunk, they are missing. I ran the following query to check the indexes:

| mcatalog values(resource_id) WHERE index=* by resource_id, index, metric_name, namespace

But there is no trace of the Load Balancer. The only namespaces I see are:
- microsoft.compute/virtualmachines
- microsoft.network/virtualnetworkgateways
- microsoft.storage/storageaccounts

Even though the following namespaces have been configured in the input settings:
- Microsoft.Compute/virtualMachines
- Microsoft.Storage/storageAccounts
- Microsoft.Storage/storageAccounts/blobServices
- Microsoft.Storage/storageAccounts/fileServices
- Microsoft.Storage/storageAccounts/queueServices
- Microsoft.Storage/storageAccounts/tableServices
- Microsoft.Network/loadBalancers
- Microsoft.Network/applicationGateways
- Microsoft.Network/virtualNetworkGateways
- Microsoft.Network/azureFirewalls

Does anyone know where these metrics might be going? Is there another way to verify whether Splunk is actually indexing Load Balancer metrics? Thanks in advance for any help!
Hi livehybrid, Yes, I did change the host=macdev in my test, and you are correct, I forgot to mention that I'm testing this in Splunk Cloud. For the on-prem infrastructure I'm just looking for the default license alert messages in the internal logs, as they already contain the information when we exceed the license, but this doesn't work in the cloud. I tested your query; it is working perfectly and is much faster than my initial query. Would it be complicated to dynamically get the license limit information in the cloud? I see that the CMC has it with `sim_licensing_limit`, but it is not usable outside the CMC. Thanks!
Hello, In versions 9.3.x and 9.4.x there is a "route" parameter added by default to the [http] stanza of the splunk_httpinput app (inputs.conf). Make sure this parameter is not overridden by any local inputs.conf. You can try adding this parameter to the [http] stanza on your HF and test: route=has_key:_dstrx:typingQueue;has_key:_linebreaker:rulesetQueue;absent_key:_linebreaker:parsingQueue
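As a sketch, a local override on the HF might look like the following. The path shown is the conventional location for a local override of the splunk_httpinput app; confirm it against your own app layout (and any deployment-server-managed copies) before deploying.

```ini
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
route = has_key:_dstrx:typingQueue;has_key:_linebreaker:rulesetQueue;absent_key:_linebreaker:parsingQueue
```

After editing, restart the HF and use btool (splunk btool inputs list http --debug) to verify which file wins the precedence for the route setting.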
Oh, I love when simple workarounds do the trick. Well, glad you got that one sorted. If I can help with anything else, even for a brainstorming session, lemme know
Hi Victor, Thank you so much for your quick responses!!!! I had a chat with the Splunk specialist yesterday morning, and Splunk only asks for the user and password when configuring the OAuth account in Splunk. Hence, we decided to temporarily give the Splunk user account UI login access in the ServiceNow UI. After successfully going through the account configuration in Splunk, we removed the UI access for the Splunk account in ServiceNow. This did the trick!!!   regards, Max
Hi @Kyles, What version of DBConnect are you using? The 3.* versions allow you to pass a field named params with values to be passed to the variables in the query/procedure:

| dbxquery connection=<string> [fetchsize=<int>] [maxrows=<int>] [timeout=<int>] [shortnames=<bool>] query=<string> OR procedure=<string> [params=<string1,string2>]

Sample query:

| dbxquery procedure="{call sakila.test_procedure(?, ?)}" connection="mysql" params="150,BOB"

Please try that out if you are running DBConnect 3.x and let me know how it goes.
Okay, The OAuth integration message asks for credentials and says that that account will be able to interact as if it were you, right? Based on what you said, this is what's happening: whenever that ServiceNow screen shows up, you are entering your own credentials to authorize OAuth, instead of the credentials of the local API account you should have on ServiceNow for this purpose (you also mentioned that account does NOT have interactive UI access). IMO, and based on your statement, I believe that in the authorization step you entered your own credentials, and that is why during your tests ServiceNow is showing Splunk interactions as if they were you. So, though I'm just repeating myself, I guess the path is:

If using basic auth:
- Create a new ServiceNow account config in the Splunk TA as you showed before, and define it as basic
- On your alerts, whenever configuring the send event / incident action, pass the name of the account you created here
- You're done!

If using OAuth:
- Create a new ServiceNow account config in the Splunk TA as you showed before, and define it as OAuth
- When redirected to the ServiceNow OAuth authorization page, enter the API account user name and password (not your own). At this step, if your API account isn't accepted, then you cannot use it, as it lacks the interactive permissions I mentioned
- According to the message, any communication between Splunk and ServiceNow over that OAuth channel will use the account that was logged in during the auth phase
- On your alerts, whenever configuring the send event / incident action, pass the name of the account you created here
- You're done!
If you want to use fieldformat, use it as the very last thing you do in the pipeline. When you fieldformat inside the join, the format is likely lost in the join process: fieldformat only affects how data is rendered, while keeping the underlying data type. The stats approach can be used on any number of datasets; you need to understand your data to process different datasets in the same pipeline, using coalesce() to get a common ID to organise the results, as you can see in my example. It is good to avoid join, as it has limitations (e.g., subsearch result limits), and if you hit those limits, the results will be silently truncated with no warning.
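The stats-plus-coalesce pattern described above can be sketched as follows. The index and field names here are made up for illustration: two datasets that carry the same ID under different field names are combined without join.

```spl
(index=orders) OR (index=shipments)
| eval common_id=coalesce(order_id, shipment_order_id)
| stats values(order_status) as order_status, values(carrier) as carrier by common_id
```

Because both datasets flow through a single stats, there is no subsearch and therefore no join result limit to silently truncate the output.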
All, We are investigating a move from v7 to v8. We currently rely heavily on the Investigation API; however, per the documentation, it is no longer available in v8. The v8 API also seems to be missing a GET call for notable_events. Is there another way in the API that we can pull details on the Enterprise Security events, investigations, and assets for v8, or do we need to hold off on upgrading while the product matures?
Hi @Damndionic, The issue with your case function is that the comparison operators aren't evaluating version strings the way you expect. The case function compares strings lexicographically (character by character), not numerically, so range-style checks like deviceOs>"iOS 1" do not reliably identify versions. Here's how you can fix it: replace the conditional checks with regular expressions to correctly identify "iOS" and "Android", using match() for the regex comparisons. Here's a revised version of your query:

| rex field=userRiskData.general "do\:(?<deviceOs>.+?)\|di\:(?<deviceId>.+?)\|db\:"
| eval validUser=if(isnotnull(userRiskData.uuid),"Valid","Invalid")
| eval op = case( match(deviceOs, "^iOS"), "iOS", match(deviceOs, "^Android"), "Android", true(), "Other" )
| eval FullResult=validUser. "-" .outcome. "-" .op

Explanation:
- match(deviceOs, "^iOS") checks if the deviceOs string starts with "iOS".
- match(deviceOs, "^Android") checks if the deviceOs string starts with "Android".
- The true() condition acts as a default to catch all other cases.

Here is a full example using makeresults:

| makeresults | eval userRiskData.general="do:Windows|di:12345|db:789", userRiskData.uuid="abc-123-uuid"
| append [| makeresults | eval userRiskData.general="do:Linux|di:67890|db:321", userRiskData.uuid=null]
| append [| makeresults | eval userRiskData.general="do:macOS|di:54321|db:333", userRiskData.uuid="def-456-uuid"]
| append [| makeresults | eval userRiskData.general="do:iOS 19|di:98765|db:444", userRiskData.uuid="ghi-789-uuid"]
| append [| makeresults | eval userRiskData.general="do:iOS 12|di:19283|db:555", userRiskData.uuid=null]
| rex field=userRiskData.general "do\:(?<deviceOs>.+?)\|di\:(?<deviceId>.+?)\|db\:"
| eval validUser=if(isnotnull(userRiskData.uuid),"Valid","Invalid")
| eval op = case( match(deviceOs, "^iOS"), "iOS", match(deviceOs, "^Android"), "Android", true(), "Other" )
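To see for yourself why lexicographic comparison of version strings misfires, here is a minimal sanity check you can paste into a search bar. It only demonstrates string comparison semantics; the field values are made up.

```spl
| makeresults
| eval cmp=if("iOS 10" < "iOS 9", "lexicographic: the character '1' sorts before '9'", "numeric order")
```

Because the strings are compared character by character, "iOS 10" sorts before "iOS 9", which is exactly why range checks on deviceOs version strings behave unpredictably.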
I've scoured the internet trying to find a similar issue, to no avail.

| rex field=userRiskData.general "do\:(?<deviceOs>.+?)\|di\:(?<deviceId>.+?)\|db\:"
| eval validUser=if(isnotnull(userRiskData.uuid),"Valid","Invalid")
| eval op = case(deviceOs>"iOS 1" OR deviceOs<"iOS 999","iOS", deviceOs>"Android 0" OR deviceOs< "Android 999", "Android", 1=1, Other)
| eval FullResult=validUser. "-" .outcome. "-" .op

I am extracting a device OS from a general field; I don't have permissions to extract it as a permanent field. When trying to use the eval to truncate the different iOS and Android versions to just "iOS" and "Android", the case is only showing the first OS type in the query. If I change the order to Android, it'll show Android and no iOS; if I keep it as is, it only shows iOS. Is this due to the rex command, or am I messing up the syntax somewhere?