All Posts


I'm not sure, but only a tiny fraction of a percent of messages seems to be affected. Our Splunk team hasn't been able to help.
It shows out of memory in the log - this could be caused by large volumes of data coming in from O365 events. You might consider changing the interval on the inputs for the collection. (I don't know if this will fix it, but it may help with the different inputs you may have; it sounds like it's bottlenecked somewhere.) Check the memory usage on the host where this add-on is running (normally a heavy forwarder) - perhaps you need to increase it if it's very low. Have a look at the troubleshooting guide; there may be items there to help you investigate further. https://docs.splunk.com/Documentation/AddOns/released/MSO365/Troubleshooting
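For example, something along these lines could help check both the add-on's errors and the memory pressure on that host (a rough sketch - the log-file pattern and the introspection field names are assumptions about a typical deployment, so adjust to what you actually see in your environment):

index=_internal source=*splunk_ta_o365* ERROR
| stats count by source

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.args=*splunk_ta_o365*
| timechart max(data.mem_used) as mem_used_mb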
Thank you so much. I just found out that it is all about the search: any time you receive and index data, to create an alert you should write a search on that index for the specific condition the user wants you to detect inside of it. Example: a redshift index / consecutive failed logins.
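For instance, a minimal sketch of such an alert search for failed logins (the index, sourcetype, and field names here are hypothetical placeholders and must match whatever your Redshift data actually contains):

index=redshift sourcetype=aws:redshift:audit action=failure
| stats count as failed_logins by user
| where failed_logins >= 3

Note this counts failures within the alert's time range rather than strictly consecutive ones; a streamstats-based variant would be needed for a strict "N in a row" definition.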
Hi, we have stopped getting O365 logs. When I looked for errors I see the error below. Does it mean the client secret has expired?

level=ERROR pid=22156 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:72 | datainput=b'xoar_Management_Exchange' start_time=1715152233 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 135, in run
    executor.run(adapter)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/batch.py", line 54, in run
    for jobs in delegate.discover():
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 225, in discover
    self._clear_expired_markers()
  File "/opt/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 294, in _clear_expired_markers
    checkpoint.sweep()
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 86, in sweep
    return self._store.sweep()
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 258, in sweep
    indexes = self.build_indexes(fp)
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/splunksdc/checkpoint.py", line 189, in build_indexes
    indexes[key] = pos
  File "/opt/splunk/etc/apps/splunk_ta_o365/lib/sortedcontainers/sorteddict.py", line 300, in __setitem__
    dict.__setitem__(self, key, value)
MemoryError
@gcusello Thank you so much
In what way are they inconsistent? (The totals are most likely different due to the rounding)
Interesting... Is it a different result every time you run it or at least the same different results for the same input?
Can I get any other suggestions on troubleshooting this issue?
Hello, I am trying to deploy WordPress + the PHP agent in Docker using a Dockerfile, following this article: https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/install-app-server-agents/php-agent/php-agent-configuration-settings/node-reuse-for-php-agent

I have already configured the AppDynamics agent following the configuration referenced in that link. This is my Dockerfile:

FROM wordpress:php7.4

# Install required dependencies
RUN apt-get update && apt-get install -y wget tar && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Copy phpinfo
COPY phpinfo.php /var/www/html

# Download and extract AppDynamics PHP Agent - use this to download the agent from the AppDynamics Download Portal
WORKDIR /var/www/html

# Copy downloaded AppDynamics PHP Agent - use this if the agent is already downloaded in the docker working dir
RUN mkdir -p /opt/appdynamics
COPY appdynamics-php-agent-linux_x64 /opt/appdynamics/
RUN chmod -R a+w /opt/appdynamics/

# Install AppDynamics PHP Agent
RUN cd /opt/appdynamics/appdynamics-php-agent-linux_x64/ && \
    ./install.sh -s -a=abcde@abcde \
    -e /usr/local/lib/php/extensions/no-debug-non-zts-20190902 \
    -i /usr/local/etc/php/conf.d \
    -p /usr/bin \
    -v 7.4 \
    abcde.saas.appdynamics.com 443 WordPress-Docker Bakcend-Tier Backend-Node

# Expose port 80
EXPOSE 80

My goal is for the agent to pick up the container name, hostname, host ID, a prefix, or whatever else automatically via the reuseNode feature, instead of manually filling in the node name for every PHP agent installation. Can we do that? With the Node.js agent we can do that, even when my application is running on Docker.
I have to admit, I did suspect that the issue might be with the 'join'. So... I have to go back to my original question: how do I run the subsearch mentioned earlier to see the data from the indexed CSV? Running the search below gives me nothing. Am I missing some obvious characters/words needed to run it on its own and not as a subsearch?

search earliest=-24h host="AAA" index="BBB" sourcetype="CCC"
| eval dateFile=strftime(now(), "%Y-%m-%d")
| where like(source,"%".dateFile."%XXX.csv")
| rename "Target Number" as Y_Field
| eval Y_Field=lower(Y_Field)
| fields Y_Field, Field, "Field 2", "Field 3"
Thanks @yuanliu, I understand it now; I'm able to get the id for all the knowledge objects owned by the user. However, I'm still not able to change the owner of a knowledge object via the rest command. I get the following error:

<msg type="ERROR">You do not have permission to share objects at the system level</msg>
</messages>

My user account has the sc_admin role, so permission should not be an issue. Am I missing something? Any help is really appreciated.
Hi, I'm new to Splunk, so I apologize if this question seems naive. While experimenting with calculated fields, I found some inconsistent results. Consequently, I removed these fields and tested directly in the search. I'm aware that the syntax I'm using here with eval is not the one specified in the documentation, but I'm using it to simulate the calculated field (and it yields the same results). I've seen this use of eval elsewhere, but only for very simple things.

When I run:

stats sum(eval((bytes/(1024*1024)))) as MB

it works. However, when I run:

stats sum(eval(round(bytes/(1024*1024),2))) as MB

I get results, but they are totally inconsistent. What could be happening? Where is my mistake? (Note that I'm not looking for the correct solution - I already have it - but I want to understand why this syntax doesn't work.) Thanks.
Hi, I am trying to achieve an automation where I will be running a query and then passing the IPs, which I need to send to Akamai via a POST API. I know the edgegridauth library can be used to achieve this, but I got stuck on how the action would be configured. Can someone help?
I am getting duplicate events in Splunk from AWS CloudWatch, and I am sending data from only one source to Splunk. How do I resolve it?
Let me first point out that you can only determine if a group of pods as denoted in pod_name_lookup is completely absent (missing), not any individual pod_name. As such, your "timechart" can only have values 1 and 0 for each missing pod_name_lookup. Second, I want to note that the calculations to fill null importance values are irrelevant to the problem at hand, therefore I will ignore them.

The way to think through a solution is as follows: You want to populate a field that contains all non-critical pod_name_lookup values in every event so you can compare with the running ones in each time interval. (Hint: eventstats.) In other words, if you have these pods

_time               | pod_name          | sourcetype
2024-05-08 01:42:10 | apache-12         | kubectl
2024-05-08 01:41:58 | apache-2          | kubectl
2024-05-08 01:41:46 | kakfa-8           | kubectl
2024-05-08 01:41:00 | apache-13         | kubectl
2024-05-08 01:40:52 | someapp-6         | kubectl
2024-05-08 01:39:40 | grafana-backup-11 | kubectl
2024-05-08 01:39:34 | apache-4          | kubectl
2024-05-08 01:39:32 | kafka-6           | kubectl
2024-05-08 01:39:26 | someapp-2         | kubectl
2024-05-08 01:38:16 | apache-12         | kubectl
2024-05-08 01:38:10 | grafana-backup-6  | kubectl

and the pod_list lookup contains the following

importance   | namespace | pod_name_lookup
critical     | ns1       | kafka-*
critical     | ns1       | apache-*
non-critical | ns2       | grafana-backup-*
non-critical | ns2       | someapp-*

(As you can see, I added "someapp-*" because in your illustration only one app is "non-critical". This makes the data nontrivial.)

You will want to produce an intermediate table like this (please ignore the time interval differences, just focus on the material fields):

_time               | pod_name_lookup           | pod_name_all
2024-05-08 01:35:00 |                           |
2024-05-08 01:36:00 | apache-* grafana-backup-* | grafana-backup-* someapp-*
2024-05-08 01:37:00 | kafka-* someapp-*         | grafana-backup-* someapp-*
2024-05-08 01:38:00 | apache-* grafana-backup-* | grafana-backup-* someapp-*
2024-05-08 01:39:00 | apache-* someapp-*        | grafana-backup-* someapp-*
2024-05-08 01:40:00 | apache-* kakfa-*          | grafana-backup-* someapp-*

(This illustration assumes that you are looking for missing pods in each calendar minute; I know this is ridiculous, but it is easier to emulate.) From this table, you can calculate which value(s) in pod_name_all is/are missing from pod_name_lookup. (Hint: mvmap can be an easy method.)

In SPL, this thought process can be implemented as

index=abc sourcetype=kubectl importance=non-critical
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| dedup pod_name
| append
    [inputlookup pod_list where importance = non-critical
    | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all
| where sourcetype == "kubectl"
| timechart span=1h@h values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all
| eval missing = mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all)))
| where isnotnull(missing)
| timechart span=1h@h count by missing

In the above, I changed the time bucket to 1h@h (as opposed to 1m@m used in the illustrations). You need to change that to whatever suits your needs.
Here is an emulation used to produce the above tables and the chart:

| makeresults format=csv data="_time, pod_name
10,apache-12
22,apache-2
34,kakfa-8
80,apache-13
88,someapp-6
160,grafana-backup-11
166,apache-4
168,kafka-6
174,someapp-2
244,apache-12
250,grafana-backup-6"
| eval _time = now() - _time
| eval sourcetype = "kubectl", importance = "non-critical"
| eval pod_name_lookup = replace(pod_name, "\d+", "*")
``` the above emulates
index=abc sourcetype=kubectl importance=non-critical
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| dedup pod_name ```
| append
    [makeresults format=csv data="namespace, pod_name_lookup, importance
ns1, kafka-*, critical
ns1, apache-*, critical
ns2, grafana-backup-*, non-critical
ns2, someapp-*, non-critical"
    | where importance = "non-critical"
``` subsearch thus far emulates
| inputlookup pod_list where importance = non-critical ```
    | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all
| where sourcetype == "kubectl"
| timechart span=1m@m values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all
| eval missing = mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all)))
| where isnotnull(missing)
| timechart span=1m@m count by missing
Hello, thanks for replying. I checked the permissions and disabled the AV, still the same outcome. Any other ideas? Best regards, Alex
Hi, yes, I was able to get past this issue. I edited the JDBC URL and added the additional KV pairs below:

jdbc:sqlserver://IP:Port;databaseName=dbname;selectMethod=cursor;encrypt=false;trustServerCertificate=true

Hope this helps.
What you can search for depends on your data. If you have properly onboarded data, you should have your events ingested with a well-defined sourcetype and have your fields extracted. Otherwise Splunk might simply not know what you mean by "src_addr" or "dest_addr". Even better if you have your data CIM-compliant - then you can search from a datamodel using just standardized fields regardless of the actual fields contained within the original raw event. But that's a bit more of an advanced topic.

The first thing would be to verify what fields you actually have available. Try running

index=firewall host=your_firewall | head 10

in verbose mode and expand a single event to see what fields are extracted. If your fields are called - for example - src_ip and dest_ip, searching for src_addr and dest_addr will yield no results because Splunk doesn't know those fields.
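For example, if your firewall data were mapped to the CIM Network_Traffic datamodel, a search using only standardized fields could look like the sketch below (this assumes the datamodel is actually populated in your environment):

| tstats count from datamodel=Network_Traffic where All_Traffic.action=blocked by All_Traffic.src All_Traffic.dest
| rename All_Traffic.* as *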
| eval row=mvrange(0,count)
| mvexpand row
| fields - row
| eval count=1
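To see what this does, here is a minimal, self-contained sketch (the sample data is made up; count is the field from the snippet above): each row is fanned out into count identical rows, each ending up with count=1.

| makeresults format=csv data="host,count
web-1,3
web-2,2"
| eval row=mvrange(0,count)
| mvexpand row
| fields - row
| eval count=1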
Try to post code snippets in either a preformatted paragraph or a code block - it helps readability.

But to the point - the BREAK_ONLY_BEFORE setting is only applied when SHOULD_LINEMERGE is set to true (which generally should be avoided whenever possible). To split your input into events containing both the timestamp and the command, you'd need to adjust your LINE_BREAKER to not just treat every line as a separate event, but to break the input stream at newlines followed immediately by a hash and a timestamp. It would probably be something like

LINE_BREAKER = ([\r\n]+)#\d+
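Put together, a minimal props.conf sketch along those lines might look like this (the sourcetype name is a placeholder, and the two timestamp settings assume the lines start with an epoch timestamp such as #1715152233 - drop or adjust them for your actual format):

[your_history_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)#\d+
TIME_PREFIX = ^#
TIME_FORMAT = %s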