All Posts


Hi, I believe the problem is not on the AppDynamics side. I just tested by inserting an email address into the string field, and it shows correctly. Here is my sample curl populating the data, which works. Can you try to manually post to Analytics and see if it works? Perhaps it is Power Automate that doesn't format the email address correctly? curl -X POST "https://xxxxxxx/events/publish/TEST" -H "X-Events-API-AccountName:xxxxxxxx" -H "X-Events-API-Key:xxxxxx" -H "Content-type: application/vnd.appd.events+json;v=2" -d '[{"expirationDateTime": 1597135561333, "appleId": "test@test.com", "DaysLeft": "176"}]'
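A quick way to rule out a quoting issue outside of Power Automate is to build the event body yourself and check that the email lands in the JSON as a plain string. A minimal Python sketch, mirroring the curl example above; the URL, account name, and key are placeholders, not real values:

```python
import json

# Placeholder endpoint details -- substitute your own controller and credentials.
EVENTS_URL = "https://analytics.example.com/events/publish/TEST"
HEADERS = {
    "X-Events-API-AccountName": "your-account-name",
    "X-Events-API-Key": "your-api-key",
    "Content-type": "application/vnd.appd.events+json;v=2",
}

def build_payload(apple_id: str, days_left: int, expiration_ms: int) -> str:
    """Serialize one event record the same way the curl example does."""
    record = [{
        "expirationDateTime": expiration_ms,
        "appleId": apple_id,   # pass the raw string; json.dumps adds the quotes
        "DaysLeft": str(days_left),
    }]
    return json.dumps(record)

payload = build_payload("test@test.com", 176, 1597135561333)
print(payload)
```

If the flow pre-wraps the value in quotes before it is serialized, the field arrives as "\"test@test.com\"" instead of a plain string, which is one common way the schema ends up showing null or doubled quotes.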
It's null. Even if I click on the event, it shows a null value.
His AD account, on a Windows system.
That's strange. Can you run the query "select appleid from intune_vpp1"? Does it show null values as well? Also, double-click on any of the events and check whether the email value is shown in the popup screen, or whether they are all null there too. I know some characters cause the main view to show null even though the field has a value. I have not populated it with an email address before, so I will run a test on my side as well.
What information do you have in Splunk? Which system is the user locked out of?
Please share your two searches (in code blocks)
I am pushing values from Power Automate to an AppDynamics schema. All values are getting captured under the AppD schema except appleId, which is an email ID that I defined as a string. The appleId value is null. Why is it not capturing the value?
There is a user who says his account is locked. I want to use Splunk to check what the cause is. How can I do that?
Hi Uma, What do you mean the value is not getting updated? If you run the query in the query browser, does it return the correct value? Is it only the calculated metric that doesn't return the correct value?
What information do you have available to you to help you determine this?
Thanks for the query. I need to send an alert the day before the daylight saving changes in Europe, i.e. Sun, Mar 30, 2025 and Sun, Oct 26, 2025. Could you please tell me how to update this query? Let's say it runs at 2 PM the day before, with the message.
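One way to detect the transition generically, rather than hard-coding the two dates, is to compare today's UTC offset with the offset two days ahead: the two differ only on the day before the clocks change. A sketch in SPL, assuming the search head runs in a European timezone and the alert is scheduled daily at 2 PM:

```
| makeresults
| eval offset_now=strftime(now(), "%z")
| eval offset_after=strftime(relative_time(now(), "+2d@d"), "%z")
| where offset_now != offset_after
| eval message="Daylight saving change tomorrow night: UTC offset moves from " . offset_now . " to " . offset_after
```

Scheduled daily, this produces a result (and so can trigger the alert) only on the Saturday before each transition.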
Thanks for the code. It seems to work manually with that. I need to report to Cisco that the zero-code instrumentation is not working as expected. I don't have the privileges to open a case for that.
Hello everyone! I'm not sure how to correctly name this, but I will try to explain carefully what I want to achieve.

In our infrastructure we have plenty of Windows Server instances with the Universal Forwarder installed. All servers are divided into groups according to the application they host. For example, Splunk servers have the group 'spl', remote desktop session servers have the group 'rdsh', etc. Each server has an environment variable with this group value. By design, the access policy for logs was built on these groups: one group, one index. Because of that, each UF input stanza has the option "index = group". Under this scheme, the introspection logs of the UF agents themselves all land in the 'spl' (Splunk) group/index.

And here the nuisance starts. Sometimes UF agents report errors that require action on the running hosts, for example restarting the agent manually. I see these errors because I have access to the 'spl' index, but I don't have access to all the Windows machines, so I have to notify each machine owner manually.

So, the question is: how can I create a sort of tag or field on the UF that would let me separate all the Splunk UF logs by these groups? Maybe I can use our environment variable to achieve this? I only need to access this field at search time, to create alerts that notify machine owners instead of me.
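One approach that fits this setup is to stamp the group onto every event as an indexed field via _meta in inputs.conf on the UF. Note that inputs.conf does not expand arbitrary environment variables, so the group value typically has to be baked in per group, for example through a per-group deployment server app. A sketch; the field name host_group and the group rdsh are illustrative:

```
# inputs.conf on the UF (deployed per group, e.g. via a deployment server app)
[default]
_meta = host_group::rdsh

# fields.conf on the search head, so the indexed field is usable at search time
[host_group]
INDEXED = true
```

A search such as index=spl host_group=rdsh log_level=ERROR could then drive per-group alerts that notify the right machine owners directly.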
Ahh... ok. If it is supposed to mean all results for the max value of field1, it's also relatively easy to do with sort and streamstats. Your typical | sort - field1 will give you your data sorted in descending order, which means your max values come first. This in turn means that you don't have to run eventstats over the whole result set. Just use streamstats to copy over the first value, which must be the maximum: | streamstats current=t first(field1) as field1max Now all that's left is to filter: | where field1=field1max Since we're operating on our initial results, we've retained all the original fields. Of course, for an additional performance boost you can remove unnecessary fields prior to sorting, so you don't needlessly drag them around just to discard them immediately afterwards if you have a big data set to sort. (The same goes for limiting your processed data volume with the eventstats-based solution.)
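Putting those steps together as one pipeline (field1 is the example field from this thread; the final command just drops the helper column):

```
| sort 0 - field1
| streamstats current=t first(field1) as field1max
| where field1=field1max
| fields - field1max
```

The 0 after sort is worth noting: sort truncates to 10,000 results by default, and sort 0 lifts that limit so larger result sets aren't silently cut off before the filter runs.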
Yes, those volumes of data seem off. While in the CSV case you could argue that it's the gzipped file size (though it still looks a bit low; with typical gzip compression you expect around a 1:10 ratio), the KV size is way too small.
I'm not aware of any built-in mechanism that allows you to do so. Maybe some external EDR solution captures that, but I can't recommend a specific one.
Hi Mario, Thanks for the update. Since it's not creating updated values, we have applied the formula in Power Automate itself. Now the issue is: with "type": "string", "value": "abcd@ef.gh.com", the value ""abcd@ef.gh.com"" is not showing up under the AppD schema, even though the value is getting parsed. I initialized it as a string and it worked earlier; once I deleted the schema and created it again, the value shows as null.
I already installed TA-Windows and restarted the UF, and also installed it on the indexer, but why can I still not read the output? Did I forget to configure something?
Hmm, so if one of the endpoints gets hacked and someone runs a script, we cannot collect information from the output in cmd/PowerShell?
I am using the same index for both stats distinct_count and timechart distinct_count, but the results from timechart are always higher. Does anyone know the reason behind this and how to resolve it? I have also tried setting the bucket span and plotting the timechart, but the results did not match stats distinct_count. Please help.