All Posts


Hello @landzaat, could you provide a screenshot of the whole screen showing this error? Thanks. For your information, scheduling PDF delivery may create a ScheduledView like this, and there may be a limit on the name size: * If this helps, please upvote or accept the solution *
Thanks @isoutamo. This works as expected; the only thing is that it's not grouping by user_id but rather by _time in 1-hour buckets. Is it possible to group by user_id? Current SPL:

base search
| rex user_id
| eval takeIn = case(_time>=relative_time(now(),"@d"), "take", _time<=relative_time(now(), "-1d"), "take", true(), "drop")
| where takeIn = "take"
| timechart span=1h count
| timewrap d series=short
| fields _time s1 s0
| rename s1 as today, s0 as yesterday
| where today != ""
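A minimal sketch of one way to keep per-user grouping, assuming the goal is a count per user_id for today vs. yesterday (the day boundaries and the `rex` from the original search are carried over as-is and are illustrative): replace timechart/timewrap with bin + stats/chart, since timechart always pivots on _time and drops extra group-by fields:

```
base search
| rex user_id
| eval day=case(_time>=relative_time(now(),"@d"), "today",
                _time>=relative_time(now(),"-1d@d") AND _time<relative_time(now(),"@d"), "yesterday")
| where isnotnull(day)
| bin _time span=1h
| stats count by user_id day _time
| chart sum(count) AS count over user_id by day
```

bin + stats keeps arbitrary group-by fields, at the cost of losing timechart's automatic time axis.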
When trying to schedule a PDF delivery for a dashboard, the error message Parameter "name" must be 100 characters or less is displayed. The dashboard runs fine, and exporting to PDF has no issues. Where do I find this "name" parameter?
Hi @gcusello  Getting error  
Hello. We have made it work. This is the stanza we have configured in transforms.conf on the heavy forwarder:

[setindexHIGH]
SOURCE_KEY = field:topic
REGEX = audits
DEST_KEY = _MetaData:Index
FORMAT = imp_high

Thanks for your help.
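For anyone landing here later: a transforms.conf stanza like this only takes effect once it is referenced from props.conf on the same instance. A minimal sketch, assuming the events arrive with a sourcetype of kafka:events (the sourcetype name is an assumption; substitute your own):

```ini
# props.conf on the same heavy forwarder
# [kafka:events] is a placeholder - use the actual sourcetype of your data
[kafka:events]
TRANSFORMS-setindex = setindexHIGH
```

Index-time transforms run where the data is first parsed, which is why this pairing lives on the heavy forwarder rather than further downstream.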
Hi @parthiban, ok, you could use this approach: create an alert that doesn't send an email, running this search:

index="XXXXX" "Genesys system is available"
| spath input=_raw output=new_field path=response_details.response_payload.entities{}
| mvexpand new_field
| fields new_field
| spath input=new_field output=serialNumber path=serialNumber
| spath input=new_field output=onlineStatus path=onlineStatus
| where serialNumber!=""
| lookup Genesys_Monitoring.csv serialNumber
| where Country="Bangladesh"
| stats count(eval(onlineStatus="offline")) AS offline_count count(eval(onlineStatus="online")) AS online_count earliest(eval(if(onlineStatus="offline",_time,""))) AS offline_time earliest(eval(if(onlineStatus="online",_time,""))) AS online_time
| fillnull value=0 offline_count
| fillnull value=0 online_count
| eval condition=case(
    offline_count=0 AND online_count>0, "Online",
    offline_count>0 AND online_count=0, "Offline",
    offline_count>0 AND online_count>0 AND online_time>offline_time, "Offline but newly online",
    offline_count>0 AND online_count>0 AND offline_time>online_time, "Offline",
    offline_count=0 AND online_count=0, "No data"),
  search="Device went offline and recovery status"
| search condition="Offline" OR condition="Online" OR condition="Offline but newly online"
| table search condition
| collect index=summary

then you can run a search like the following:

index=summary search="Device went offline and recovery status"
| stats dc(condition) AS condition_count last(condition) AS condition_last values(condition) AS condition
| search (condition_last="Offline" condition_count=1) OR (condition_last="Online" condition_count>1)

with an email action to inform you that there's a new offline event, or an online event after the offline. Please check the conditions because I cannot test them, but they should be correct. Ciao. Giuseppe
Hi, The AppDynamics Ansible collection for the machine agent has an issue: if you want to change the values of tier, application, or node_name but they already have values in the conf file, you cannot change them without first uninstalling and then re-installing the agent. I can't give a link to the git repo, because the Ansible collection does not expose which git repo the collection was synced from. The collection page is https://galaxy.ansible.com/ui/repo/published/appdynamics/agents/, which also contains a tarball of the code. The specific code is in both:

roles/java/tasks/merging-controller-info.yml (starting line 98)
roles/machine/tasks/merging-controller-info.yml (starting line 108)

I can submit a PR for this if you point me to the git repo for it; otherwise I'd ask the maintainers to fix it. Any suggestions or a way through?
@PickleRick, my aim is to save license volume. Can you assist me in blacklisting some of the most common Windows security events?
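A minimal sketch of what event-code blacklisting looks like in inputs.conf on the Windows forwarder (the event codes below are just examples of commonly high-volume Security events; which ones are safe to drop depends entirely on your security requirements):

```ini
# inputs.conf on the Windows forwarder
[WinEventLog://Security]
disabled = 0
# drop selected EventCodes before they are forwarded
# (5156/5157 = Windows Filtering Platform, 4658 = handle closed - examples only)
blacklist = 5156,5157,4658
```

Events matching the blacklist are discarded at the input, so they never reach the indexers and never count against the license.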
Hi @richgalloway, can you please help me with the above requirement?
Hi @gcusello I understood your first point. However, in our case, we don't want that behaviour. Once we identify the offline message, we don't want to receive repeated alerts, because we have already created a ticket for that incident. We only need to be notified once the device becomes online. I believe throttling won't work for our requirement, as we already have a similar alert mechanism in our observability tools, and we aim to implement the same in Splunk. For better understanding, I've provided an example. Please guide me on any way to achieve this requirement.

1st search - OFFLINE - alert will trigger
2nd search - OFFLINE - alert suppressed
3rd search - OFFLINE - alert suppressed
4th search - ONLINE - alert will trigger
5th search - ONLINE - alert suppressed
6th search - OFFLINE - alert will trigger
7th search - OFFLINE - alert suppressed
8th search - OFFLINE - alert suppressed
9th search - ONLINE - alert will trigger
10th search - ONLINE - alert suppressed
... etc.
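One way to sketch this transition-only alerting in SPL (the index, search strings, and status extraction are placeholders for your actual data): compare each result's status with the previous one using streamstats, and keep only the rows where the status changed:

```
index="XXXXX" ("offline" OR "online")
| eval status=if(searchmatch("offline"), "OFFLINE", "ONLINE")
| sort 0 + _time
| streamstats current=f last(status) AS prev_status
| where isnull(prev_status) OR status!=prev_status
```

An alert on a search along these lines, triggering when the number of results is greater than 0, would fire on state changes and stay silent while the state repeats; the exact windowing per scheduled run would still need tuning for your data.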
I don't think any of those addons contains such functionality. You would need to explicitly gather such info with a scripted input, probably using dmidecode on Linux (an external tool that is not necessarily installed on your system) and some other tool on Windows (I'm not even sure there is a standardized way of doing that with a vanilla Windows installation).
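A rough sketch of what such a scripted input could look like on Linux (the key=value format is illustrative; dmidecode typically requires root, so the forwarder would need sufficient privileges):

```shell
#!/bin/sh
# Emit basic hardware identity as a single key=value event.
# Falls back to a marker event if dmidecode is not available.
if command -v dmidecode >/dev/null 2>&1; then
    printf 'vendor="%s" product="%s" serial="%s"\n' \
        "$(dmidecode -s system-manufacturer)" \
        "$(dmidecode -s system-product-name)" \
        "$(dmidecode -s system-serial-number)"
else
    echo 'dmidecode_available=false'
fi
```

You would then point a [script://...] stanza in inputs.conf at the script with a suitable interval. On Windows, a PowerShell script querying WMI would be the rough equivalent, but as noted, there is no single standard tool.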
Also please note (it's worth mentioning because it's not obvious) that if you aggregate some values into several multivalued fields (like in your case, the multivalued VM field and col2 field), the contents of those multivalued fields are from then on independent of each other. So you can't, for example, sort one using the order of the other. Again, it's not a spreadsheet.
I'm still not sure what the source datasets are and what the result should be. I see some attempts at solving this riddle in the thread, but I'm not 100% sure we're all on the same page regarding what we're working with and what we want to achieve in the end. Could you please post samples of your data and what the result should look like?
While I didn't do comparison tests myself, the general consensus is that XML-rendered Windows logs are the better choice. They do not cause problems with parsing (I vaguely recall there were some problems with ambiguous data in the traditionally formatted logs; colleagues with more experience of older versions could probably tell you more). Also, they tend to be actually smaller than traditionally formatted logs.
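For reference, the rendering is controlled per event-log input in inputs.conf; a minimal sketch, shown for the Security log as an example:

```ini
# inputs.conf on the Windows forwarder
[WinEventLog://Security]
renderXml = true
```

With renderXml = true the events arrive as XML, typically paired with the Splunk Add-on for Microsoft Windows so the XML fields are extracted properly.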
If enabled, acknowledgements are returned within the connection established from the forwarder downstream (to an intermediate forwarder or directly to an indexer). There is no need for another connection.  
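For completeness, indexer acknowledgement is the useACK setting on the sending side; a minimal outputs.conf sketch (the group name and server addresses are placeholders):

```ini
# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
```

The receiving side needs no extra configuration beyond its normal splunktcp input; the acknowledgement travels back over the same TCP connection.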
Based on your latest update, the problem should be restated as: remove events whose requestId has a corresponding ValidationErrors value of "Invalid product". (I assume that the trailing space in the sample data is a typo.) Is this correct? In the format illustrated in the sample data, Splunk should have given you compliant JSON in ValidationErrors. Process this first, then literally implement the restated objective.

| spath input=response
| stats values(*) as * by sessionId request requestId responseStatus
| where NOT ValidationErrors == "Invalid product"

Your sample data will leave you with one row: sessionId=855762c0-9a6b, request=PUT, requestId=bc819b42-6655, responseStatus=422, with DeveloperMessage, DocumentationUrl, ErrorCode, LogId, Parameters, UserMessage, and ValidationErrors all empty.

This is the emulation used to test the method:

| makeresults
| fields - _time
| eval data = mvappend("IBroker call failed, sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 response={\"ErrorCode\":0,\"UserMessage\":null,\"DeveloperMessage\":null,\"DocumentationUrl\":null,\"LogId\":null,\"ValidationErrors\":\"Invalid product\",\"Parameters\":null}", "sessionId=855762c0-9a6b, requestId=bc819b42-6646, request=PUT responseStatus=422 ErrorMessage: unprocessable", "sessionId=855762c0-9a6b, requestId=bc819b42-6655, request=PUT responseStatus=422 ErrorMessage: unprocessable")
| mvexpand data
| rename data AS _raw
| extract ``` data emulation above ```
First, it seems to me that (master!="yoda" AND master!="mace" AND master="Jinn") and master="Jinn" are semantically identical. Is this correct? (I'm unfamiliar with the Jedi lore.) I'll assume it to be true in the following.

Second, what is preventing you from doing, for example,

index=sith broker sithlord!=darth_maul OR index=jedi domain="jedi.lightside.com" master="Jinn"
| eval name=coalesce(Jname, Sname)
| stats values(name) as names by saber_color strengths
| where mvcount(names)=1

or even

index=sith broker sithlord!=darth_maul OR index=jedi domain="jedi.lightside.com" master="Jinn"
| eval name=coalesce(Jname, Sname)
| stats values(*) as * by saber_color strengths
| where mvcount(names)=1

This way, you will have all columns preserved.

Third, could you explain "unable to utilize the index drill down for each in the search otherwise the query is 75% white noise"? Are you trying to use "Automatic" as the drilldown action? Anything "automatic" is really Splunk's guess. If you have something specific in mind, you will want to write a custom drilldown instead.
Hi, we also experience this issue: we observed that the initial mail received by a user is only displayed 24 hours after the user received it. We need help; any mitigation that gets us close to real time would be good.
Might be that there is another issue indeed. Keep us posted if there is something going on that could potentially be hitting other users as well.
Hi @parthiban, you have two solutions:

define a throttle time, so that if the device hasn't come back online after the throttle period, you get a reminder that the device is still offline;
save the offline and online events in a summary index and use it to check the condition.

The first is the easier solution, and it also helps to make sure the status isn't forgotten. The second is just a little more complicated. Ciao. Giuseppe