All Posts

I believe you have to use the full name of the field ("entry.name", for example).
CIM compliance is different and has nothing to do with whether a field can be used in the tstats command. CIM compliance means a field has a name and value described in the CIM manual (https://docs.splunk.com/Documentation/CIM/5.3.2/User/Howtousethesereferencetables). The only fields that can be used in tstats are those created at index time or those in an accelerated data model.
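For example, both of these are legal tstats searches, while a plain search-time extracted field would not work (the index, data model, and field names here are illustrative, not from your environment):

| tstats count where index=main by sourcetype

| tstats count from datamodel=Network_Traffic where nodename=All_Traffic by All_Traffic.dest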
Please give me examples of agentless and agent-based onboarding in Splunk.
While sending a REST API request to change the owner of a knowledge object, I am getting the following error: "You do not have permission to share objects at the system level", even though the user has the "sc_admin" role. Is there a specific capability that is missing that is needed for this?
Rounding errors? When you're doing

stats sum(eval(round(bytes/(1024*1024),2))) as MB

you lose part of the value, since you're "cutting off" everything after two decimal digits before summing. So the error is expected.
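If you want the sum in MB without the accumulated per-event truncation, round after summing instead, e.g. (same field names as your search):

... | stats sum(bytes) as bytes | eval MB=round(bytes/(1024*1024),2)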
OK. Start by cutting the search down to the initial search and see if the results are what you expect. In other words, check if

search earliest=-24h host="AAA" index="BBB" sourcetype="CCC"

returns any results at all. If not, it means you have a problem on the ingestion end: you have no events at all to search from (or maybe you're looking for the wrong data). Then add one step after another until the results stop being in line with what you expect. That will be the step that is wrong.
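As a sketch of the approach (pipeline steps taken from the search quoted later in this thread), re-run it with one more pipe each time and inspect the output after each step:

search earliest=-24h host="AAA" index="BBB" sourcetype="CCC"

then

search earliest=-24h host="AAA" index="BBB" sourcetype="CCC"
| eval dateFile=strftime(now(), "%Y-%m-%d")
| where like(source,"%".dateFile."%XXX.csv")

and so on, one command per iteration, until the results break.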
Splunk on its own doesn't have a z/OS component, so your data has to be going through some external stages before it reaches Splunk. We don't know what your ingestion process looks like. If the events are written by some solution to an intermediate file picked up later by a forwarder, check the file contents and see if those \xXX codes are there. If the events are pushed by syslog, sniff the traffic with tcpdump and see if they are there. Most probably the answer to one of those questions (or a similar one regarding your particular transport channel) will be affirmative, and that will mean the issue is external to Splunk: you're ingesting badly formatted data.
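On the Splunk side, a quick check for the escape codes surviving as literal text in indexed events might look like this (index and sourcetype are placeholders, and SPL backslash escaping may need tuning for your data):

index=YOUR_INDEX sourcetype=YOUR_SOURCETYPE | regex _raw="\\\\x[0-9A-Fa-f]{2}"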
Sounds like a data problem. You need to do some further analysis on the commonalities among the failing messages and the differences from the successful messages: not just the text, but how the messages are produced, where they are produced, how they are stored, when they are produced, etc.
In Splunk you need to configure alert actions; as you can see, many come out of the box for your use case, and you have a few options you can explore.

1. Use this add-on; it may help with some config/testing, so it needs to be installed: https://splunkbase.splunk.com/app/5520
2. Develop your own action (a minimal sketch of what this involves is shown below): https://dev.splunk.com/enterprise/docs/devtools/customalertactions/
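As a rough illustration of option 2 (the stanza name, label, and script name here are made up; the real spec is in the dev docs linked above), a custom action is declared in alert_actions.conf inside your app and backed by a script in the app's bin directory:

# default/alert_actions.conf in your app
[my_custom_action]
is_custom = 1
label = My Custom Action
description = Sends the alert payload to an external system
payload_format = json
python.version = python3

# bin/my_custom_action.py then receives the alert payload as JSON on stdin when the alert fires.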
I'm not sure, but only a tiny fraction of a percent of messages seem to be affected. Our Splunk team hasn't been able to help.
It shows out of memory in the log; this could be caused by large volumes of data coming in from O365 events. You might consider changing the interval in the inputs for the collection (sketched below). (I don't know if this will fix it, but it may help with the different inputs you may have; it sounds like it's bottlenecked somewhere.) Check the memory usage on the host where this add-on is running (normally a HF); perhaps you need to increase it if it's very low. Have a look at the troubleshooting guide; there may be items there to help further investigate: https://docs.splunk.com/Documentation/AddOns/released/MSO365/Troubleshooting
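The interval change would be something along these lines in inputs.conf on the host running the add-on (the stanza scheme name is an assumption based on the datainput name in the error log in this thread, and 600 is only an example value; check your existing stanza):

# inputs.conf on the heavy forwarder running the add-on
[splunk_ta_o365_management_activity://xoar_Management_Exchange]
interval = 600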
Thank you so much. I just found out that it is all about search: any time you receive and index data, to create an alert you should run a search on that index for the specific condition the user wants you to detect inside it. Example: redshift / consecutive failed logins.
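A sketch of such an alert search (index, field names, and threshold are placeholders, and this approximates "consecutive" as "repeated within the alert's time range"):

index=redshift action=failure
| stats count as failures by user
| where failures >= 3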
Hi, we have stopped getting O365 logs. When I looked for errors, I found the one below. Does it mean the client secret is expired?

level=ERROR pid=22156 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:72 | datainput=b'xoar_Management_Exchange' start_time=1715152233 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 135, in run
    executor.run(adapter)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/batch.py", line 54, in run
    for jobs in delegate.discover():
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 225, in discover
    self._clear_expired_markers()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 294, in _clear_expired_markers
    checkpoint.sweep()
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 86, in sweep
    return self._store.sweep()
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 258, in sweep
    indexes = self.build_indexes(fp)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 189, in build_indexes
    indexes[key] = pos
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/sortedcontainers/sorteddict.py", line 300, in __setitem__
    dict.__setitem__(self, key, value)
MemoryError
@gcusello Thank you so much
In what way are they inconsistent? (The totals are most likely different due to the rounding)
Interesting... Is it a different result every time you run it or at least the same different results for the same input?
Can I get any other suggestions on troubleshooting this issue?
Hello, I am trying to deploy WordPress + the PHP agent in Docker using a Dockerfile, following this article: https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/install-app-server-agents/php-agent/php-agent-configuration-settings/node-reuse-for-php-agent I have already configured the AppDynamics agent following the configuration referred to in that link. This is my Dockerfile:

FROM wordpress:php7.4

# Install required dependencies
RUN apt-get update && apt-get install -y wget tar && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Copy phpinfo
COPY phpinfo.php /var/www/html

# Download and extract AppDynamics PHP Agent - use this to download agent from AppDynamics Download Portal
WORKDIR /var/www/html

# Copy downloaded AppDynamics PHP Agent - use this if the agent is already downloaded in the docker working dir
RUN mkdir -p /opt/appdynamics
COPY appdynamics-php-agent-linux_x64 /opt/appdynamics/
RUN chmod -R a+w /opt/appdynamics/

# Install AppDynamics PHP Agent
RUN cd /opt/appdynamics/appdynamics-php-agent-linux_x64/ && \
    ./install.sh -s -a=abcde@abcde \
    -e /usr/local/lib/php/extensions/no-debug-non-zts-20190902 \
    -i /usr/local/etc/php/conf.d \
    -p /usr/bin \
    -v 7.4 abcde.saas.appdynamics.com 443 WordPress-Docker Bakcend-Tier Backend-Node

# Expose port 80
EXPOSE 80

My goal is for the agent to pick up the container name, hostname, host ID, a prefix, or whatever, automatically via the reuseNode feature, instead of manually filling in the node name for every PHP agent installation. Can we do that? With the Node.js agent we can do that even when the application is running in Docker.
I have to admit, I did suspect that the issue might be with the 'join'. So... I have to go back to my original question: how do I run the subsearch mentioned earlier to see the data from the indexed CSV? Running the below gives me nothing. Am I missing some obvious characters/words needed to run it on its own and not as a subsearch?

search earliest=-24h host="AAA" index="BBB" sourcetype="CCC"
| eval dateFile=strftime(now(), "%Y-%m-%d")
| where like(source,"%".dateFile."%XXX.csv")
| rename "Target Number" as Y_Field
| eval Y_Field=lower(Y_Field)
| fields Y_Field, Field, "Field 2", "Field 3"
Thanks @yuanliu, I understand it now, and I'm able to get the id for all the knowledge objects owned by the user. However, I'm still not able to change the owner of a knowledge object via the REST command; I get the following error:

<msg type="ERROR">You do not have permission to share objects at the system level</msg> </messages>

My user account has the sc_admin role, so permission should not be an issue. Am I missing something? Any help is really appreciated.
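For reference, ownership changes on a knowledge object generally go through its acl endpoint, along the lines of the sketch below (host, credentials, app, object name, and permissions are placeholders; note that sharing must be posted together with owner, and sharing=app rather than system may avoid the system-level permission check):

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches/MySearch/acl \
  -d owner=newowner -d sharing=app -d perms.read=* -d perms.write=admin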