All Posts
I see your logic, my bad. I'm trying to identify the start of the sequence as well, even though there is no increment based on the previous row.
What were you expecting for the first id if there is no previous row?
Hello, thank you for the answer. Indeed, trying range with a window of 2 produces results. However, I'm not picking up the start of the sequence (ID 0 and ID 1), only the last 4 IDs (2/3/4/5). Any ideas?
Assuming id is numeric, your solution should work. You could also try range with a window of 2. Here is a runanywhere example demonstrating both techniques:

| makeresults format=csv data="event,id
A,1
B,2
C,4
D,5"
| streamstats range(id) as range window=2
| streamstats current=f last(id) as prev_id
| eval increment=id-prev_id
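For readers who want to see the mechanics outside SPL, the two techniques in the runanywhere example can be sketched in plain Python (the id values mirror the makeresults data; this is an illustration, not Splunk code):

```python
# Sketch of the two gap-detection techniques:
# 1) rolling range of id over a window of 2 (like streamstats range(id) window=2)
# 2) difference between current id and the previous id (like eval increment=id-prev_id)
ids = [1, 2, 4, 5]  # same values as the makeresults data above

rows = []
prev = None
for i, cur in enumerate(ids):
    window = ids[max(0, i - 1):i + 1]     # last 2 values, like window=2
    rng = max(window) - min(window)       # range(id) over that window
    increment = None if prev is None else cur - prev
    rows.append({"id": cur, "range": rng, "increment": increment})
    prev = cur

for r in rows:
    print(r)
```

Note that the first row gets range 0 and no increment, which is exactly the issue discussed in this thread: the first event has no previous row to compare against.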
Hi, I always teach my users that when they change a log format they must also rename its sourcetype, e.g. by adding a number after it. My standard is to name sourcetypes like "vendor:system:log-file:incremental number starting from 0". That way it's easy to just add 1 to that number. r. Ismo
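As a sketch of how that convention might look on a forwarder, assuming a made-up vendor and path (names are purely illustrative):

```ini
# inputs.conf on the forwarder: the log format changed, so bump the suffix
[monitor:///var/log/acme/app.log]
# was acme:appserver:app-log:0 before the format change
sourcetype = acme:appserver:app-log:1
```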
Hi, I don't think you need such a large instance for an HF on AWS. I have used instances with 2-4 vCPUs without any real issue. IMHO it's better to use a couple of smaller instances in an LB configuration than one huge one. Of course, if you have apps like TA-aws / DBX running on those HFs, then you need a bigger instance and should also add some pipelines. You must monitor those, and if resources run short or event forwarding is delayed too much, then add more HFs or increase their size. The easiest way to do this is to add them (as you would indexers) into the MC and use the MC to analyze them. r. Ismo
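If you do end up adding pipelines to a larger HF, that is a one-line setting in server.conf (the value shown is only an example; size it to your CPU headroom):

```ini
# server.conf on the heavy forwarder
[general]
parallelIngestionPipelines = 2
```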
Hi, I suppose so. Just create a ticket with Splunk support and ask them to split it based on the amounts you give. You probably need access to those entitlements on your support portal, or you must ask the person named as your contract contact to request it from Splunk. r. Ismo
Hi guys, I just installed the misp42 app in my Splunk and added a MISP instance to Splunk; it works. But I want to compare from index=firewall srcip=10.x.x.x (my firewall logs): I want to compare dstip with ip-dst from MISP to detect unusual access activity, e.g. when dstip=ip-dst : 152.67.251.30. How can I search this with misp_instance=IP_Block field=value? I tried some searches but they do not work:

index=firewall srcip=10.x.x.x
| mispsearch misp_instance=IP_Block field=value
| search dstip=ip=dst
| table _time dstip ip-dst value action

It can't get ip-dst from the MISP instance. Can anyone help me with this, or suggest a solution to resolve it? Many thanks and best regards!
Hi, as @_JP said, the MC is just for monitoring the whole Splunk environment. You really need it in any distributed environment. There are also some useful apps on splunkbase.splunk.com which you could look at, e.g. AdminAlerts. One way to avoid running out of disk space on indexers is to migrate your environment to use volumes instead of SPLUNK_DB in all index configurations. That way Splunk automatically freezes data based on volume size instead of days or index sizes. r. Ismo
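A minimal sketch of the volume-based approach in indexes.conf (paths, sizes, and the index name are placeholders; adapt to your environment):

```ini
# indexes.conf: cap total disk usage with a volume instead of per-index sizes
[volume:hotwarm]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 500000

[my_index]
homePath = volume:hotwarm/my_index/db
coldPath = volume:hotwarm/my_index/colddb
# thawedPath cannot reference a volume; it must be a regular path
thawedPath = $SPLUNK_DB/my_index/thaweddb
```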
Hi, as others have already said, this is doable. Here is my old post on how to do it: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 Of course, in your case you must take care of the connection between the sites; it must be fast enough for this to work. One other note: you must have the same Splunk version on all nodes in the cluster, so you must choose between upgrading the old cluster first or installing the old version on AWS first. Depending on the amount of your data and the connection speed between sites, you could do this; another option is to create a new idx cluster on AWS and, once it is in use, create an additional (temporary) cluster for the old on-prem data and add it as a second cluster for your search heads. That way you could utilise e.g. Snowball to transfer data from on-site to AWS (if your connection cannot handle it in reasonable time). r. Ismo
One additional comment: you should use cgroups version 1 on RHEL 9. See https://docs.splunk.com/Documentation/Splunk/9.1.1/Workloads/Requirements r. Ismo
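RHEL 9 boots with cgroups v2 by default, so switching to v1 means adding a kernel boot argument and rebooting (a sketch of the boot configuration change; verify the exact procedure against the linked requirements page for your version):

```shell
# Tell systemd to use the legacy (v1) cgroup hierarchy, then reboot
grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
reboot
```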
Hi, that should be fixed in all recent Splunk products. You can read more at:
- https://www.splunk.com/en_us/surge/log4shell-log4j-response-overview.html
- https://www.splunk.com/en_us/blog/bulletins/splunk-security-advisory-for-apache-log4j-cve-2021-44228.html
r. Ismo
Good morning, I want to confirm how a value is expressed in the sap_idoc_data table after installing the AppDynamics monitoring solution for SAP 4Hana. Multiple fields are present, and one in particular has me confused: is the field PROCESSING_TIME expressed in seconds or in milliseconds? And what does PROCESSING_TIME correspond to (is this the time for an IDoc to be read by SAP before being sent outbound, or does it have another meaning)? Kind regards
I would say this partly covers shipping systemd journal logs to Splunk. What I would really love is for Splunk to be able to accept data sent by systemd-journal-upload ( https://www.freedesktop.org/software/systemd/man/latest/systemd-journal-upload.service.html ). That way you would not need a forwarder on any popular systemd distribution anymore; you could just use systemd.
Hi, I think this answer should also work in Splunk Cloud: https://community.splunk.com/t5/Alerting/How-do-you-disable-enable-alerts-via-the-REST-API/m-p/441558 Just change the server URL to the correct one and ensure that the REST API is enabled on your stack. r. Ismo
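As a sketch of the REST call from that linked answer, the saved-search endpoint takes a POST with disabled=1 to disable an alert (host, app, owner, and search name below are placeholders; adapt them to your stack):

```python
# Build the management-endpoint URL for toggling a saved search (alert).
# POSTing disabled=1 to this URL disables the alert; disabled=0 re-enables it.
from urllib.parse import quote

def saved_search_url(host, owner, app, search_name):
    """Return the REST endpoint URL for a saved search."""
    return (f"https://{host}:8089/servicesNS/{quote(owner)}/{quote(app)}"
            f"/saved/searches/{quote(search_name)}")

url = saved_search_url("mystack.splunkcloud.com", "nobody", "search", "My Alert")
payload = {"disabled": "1"}  # send as form data along with your auth token
print(url)
```

The actual POST (e.g. via curl or the requests library) needs authentication, which varies by deployment, so it is left out here.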
Hi @Pranitkolhe ... the Splunk documentation has a nice page on this: https://docs.splunk.com/Documentation/Splunk/9.1.1/Security/ConfigureSplunkforwardingtousesignedcertificates
Hi @Omar, I'm a Community Moderator in the Splunk Community. This question was posted 3 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi, if I understand correctly, those two are the same; Entra is just a new name for the Azure AD service. Microsoft Entra ID is the new name for Azure AD. The names Azure Active Directory, Azure AD, and AAD are replaced with Microsoft Entra ID. Microsoft Entra is the name for the product family of identity and network access solutions; Microsoft Entra ID is one of the products within that family. r. Ismo
Hi, there was a presentation about this at the last .conf. You can find it at https://conf.splunk.com/watch/conf-online.html?search.event=conf23&search=SEC1936B#/ r. Ismo
Hi, predict needs time series data to make a forecast, and it needs enough data points to do so. Based on your need, you should select a suitable algorithm and other parameters, or just use predict with a field list like:

index=_internal source=*/var/log/splunk/*.log
| timechart count by sourcetype
| fields splunkd splunkd_access
| predict splunkd splunkd_access

Could you share your current data (inside a "</>" block)? r. Ismo