All Posts

Hello! You are correct. I had to dig into it and found out that the primaryGroupID is considered an "implicit membership." It is uncommon to change, but the Guest account's primaryGroupID of 514 (the RID of Domain Guests) is an example. The issue shows up with the Guest user account as well, since it is (traditionally) only a member of the Domain Guests security group. I was able to confirm this using the Windows LDP tool. Apparently I had just never had to query for all memberships directly over LDAP before; I had always used third-party tools, which include even the "implicit" memberships.
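For anyone hitting the same wall, here is a minimal sketch of an LDAP query that returns both kinds of members (the base DN and group DN below are placeholders, not from the original post). Explicit members match on memberOf; implicit members only carry the group's RID in primaryGroupID, so the filter has to OR the two conditions:

    # 514 is the well-known RID of Domain Guests; adjust the DNs and bind options for your domain
    ldapsearch -b "DC=example,DC=com" \
      "(|(memberOf=CN=Domain Guests,CN=Users,DC=example,DC=com)(primaryGroupID=514))" \
      sAMAccountName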
Hello! I maintain Splunk reports. Some of the Pivot reports are based on a dataset that is generated by a simple search, and duplicate values were not taken into account in the generation. Due to an error, there were two data sources for a few weeks, which resulted in identical duplicate rows in the dataset. Going forward, duplicate rows can be removed from the dataset with a simple dedup. However, are there any best practices for fixing this?
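(For context, the "simple dedup" mentioned above would look something like the following; the column names are placeholders, since the post does not name them. Deduplicating on every field that defines a unique row keeps exactly one copy of each duplicate:)

    <base search that feeds the dataset>
    | dedup columnA columnB columnC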
1. Normally a non-admin user should not have this capability. It is normally used for maintaining credentials consumed by third-party integrations (modular inputs, custom alert actions).
2. This works for credentials managed in the official Splunk way. If, for some reason, an add-on developer decided to do something "their own way" (for example, deciding that each run of an input pulls credentials from a GitHub project; no, that's not a real example, but nothing forbids an add-on author from inventing anything, no matter how stupid), that will most probably not be limited by this capability.
3. Obviously, if credentials are stored for automated access, you should have additional controls implemented on the destination system to mitigate the risk of their abuse. Their use should of course be based on the rule of least required privilege, and ideally they should be limited per source IP. At the very least, if there is no other way, their use on the destination system should be monitored and reviewed regularly.
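(A quick way to check who actually holds that capability in your environment; this assumes the capability in question is list_storage_passwords, the one that governs the /services/storage/passwords endpoint:)

    | rest /services/authorization/roles
    | search capabilities=list_storage_passwords OR imported_capabilities=list_storage_passwords
    | table title capabilities imported_capabilities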
@livehybrid @kiran_panchavat thank you very much. I set the props like this, and field extractions are fine now.

[netiq]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<\d+>
TIME_PREFIX = rt=
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
KV_MODE = auto
TRUNCATE = 99999
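(If it helps anyone later: a quick sanity check that the timestamp settings above are being honored, assuming the sourcetype is netiq as in the stanza, is to compare index time to event time; a small, stable lag suggests TIME_PREFIX/TIME_FORMAT are working:)

    sourcetype=netiq
    | eval lag_seconds = _indextime - _time
    | stats avg(lag_seconds) max(lag_seconds)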
@ljvc I appreciate the information you were able to provide; this is helpful. On a side note, I do have an active case open with Splunk support on this topic. Their latest update was that this is a reported issue and that they expect it to be addressed in ES 8.2, per an internal JIRA ticket.
Yes, that is correct; we are using admon in the default configuration. I'll give that a go. Also, if I wanted to limit it by the OU and then by destination IP, would I use a transforms.conf for that? Many thanks
Hi @Mobyd Please could you confirm: is this using an admon:// input? If so, you should be able to specify a "startingNode", which would be the OU you would like to monitor. https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf
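(A minimal sketch of what that could look like; the stanza name, OU, and domain components below are placeholders, not taken from the thread:)

    [admon://TargetOU]
    startingNode = OU=ServiceAccounts,DC=example,DC=com
    monitorSubtree = 1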
Hi @tech_g706 The issue here is that the syslog arrives with the default syslog sourcetype, and you shouldn't start applying field extractions to a default sourcetype. Do you use the syslog input for any other feeds? If you don't use it for other feeds, then one thing you could do is change the sourcetype in the syslog input stanza to something specific, like "netiq:log", and then apply your relevant props/transforms based on that sourcetype. However, if you are using the syslog input for other feeds too, then you would need some other props/transforms to determine IF the data is NetIQ and then apply props accordingly, such as changing the sourcetype for that data. The other thing you might want to look at is Splunk Connect for Syslog (SC4S), which supports the CEF format that NetIQ is sending; check out the relevant SC4S docs.
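(To illustrate the second option, here is a hedged sketch of an index-time sourcetype rewrite; the transform name, regex, and target sourcetype are my assumptions, so adjust the REGEX to whatever reliably identifies the NetIQ CEF events:)

    # props.conf — attach a transform to the default syslog sourcetype
    [syslog]
    TRANSFORMS-netiq = force_sourcetype_netiq

    # transforms.conf — rewrite the sourcetype when the event looks like NetIQ CEF
    [force_sourcetype_netiq]
    REGEX = CEF:\d+\|NetIQ
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::netiq:log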
Hi, I am trying to gather data from a specific organisational unit in Active Directory and ignore everything else. I have tried with a transforms.conf to allow it, but it didn't seem to work. I could sort of get it to work by writing a block for everything else, but that's a bit of a pain as the environment is shared. Has anyone had any experience doing this sort of thing?
Hi! Yes, here is the complete search:

$case_token$ sourcetype=hayabusa $host_token$ $level_token$ $rule_token$
| table Timestamp, host, Computer, Level, Channel, RecordID, EventID, RuleTitle, Details, *

Channel is added as a field in the table command, as well as specified in the code:

<fields>Timestamp, host, Computer, Level, Channel, RecordID, EventID, RuleTitle, Details</fields>
@tech_g706 The default syslog sourcetype is too generic and often leads to improper parsing, as it is not tailored to specific log formats like NetIQ. Instead, create a custom sourcetype to handle the unique structure of NetIQ logs.

1. Review the Answers post: Solved: What are the best practices for defining source types? - Splunk Community

References:
https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/Listofpretrainedsourcetypes
https://kinneygroup.com/blog/splunk-magic-8-props-conf/
Hi All, has anyone worked with OpenText NetIQ logs before? We are receiving the NetIQ logs via syslog, but the sourcetype is set to the default 'syslog', and field extractions are not performed properly. Since I cannot find a Splunk TA for NetIQ logs, I would appreciate some suggestions for the sourcetype assignment for NetIQ logs. Thank you
I have disabled it under Configuration > Instrumentation > Error Detection > Ignored Messages: xxxx.Logs.EventLogger : Error Message: Error, but it is still creating noise. Please guide me on how to stop this. Thanks
We have a .NET application and are continuously getting these events every 20 seconds. Can you please guide me on how to stop the noise? I have disabled it, but it still persists. Thanks, Dinesh
Hi, thanks for the reply. It gives me all the details, but it is putting every field in the event into columns. Could you please help me get only selected fields in a table, like: PNO | Engine | Status | Service Account. Regards, AKM
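(A minimal sketch, assuming the extracted field names match the column labels above; quote any field name that contains a space:)

    <your base search>
    | table PNO Engine Status "Service Account"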
Hi @Varun18 It is not easy to get a list of all the usernames, but the passwords are easy via the /services/storage/passwords endpoint. However, you might have some success with the following search I've put together. It uses a map command, so be careful: it gathers the passwords and then attempts to reconstruct the stanza from the config file each one originated in!

| rest /services/storage/passwords
| search clear_password!="``splunk_cred_sep``S``splunk_cred_sep``P``splunk_cred_sep``L``splunk_cred_sep``U``splunk_cred_sep``N``splunk_cred_sep``K``splunk_cred_sep``"
| table clear_password realm username
| rex field=realm ".+\#(?<app>[^\#]+)\#(?<configPath>.+)"
| table app configPath username *
| rex field=username "(?<stripUsername>[^\`]+)"
| stats latest(*) AS *, list(clear_password) as concat_clear_password by configPath username app
| eval restPath="/servicesNS/-/-/".configPath."/".stripUsername
| map maxsearches=100 search=" | rest $restPath$ | foreach * [| eval secretField=mvappend(secretField,IF('<<FIELD>>'==\"******\",\"<<FIELD>>\",null()))] | eval clear_password=\"$concat_clear_password$\" | eval configPath=\"$configPath$\" | eval app=\"$app$\" | fields - eai:* author disabled published updated splunk_server "
| rex field=configPath "configs/conf-(?<configFileName>[^\/]+)"
| eval isJson=IF(json_valid(clear_password),"isJson","NotJson")
| tojson
| eval jsonKeys=json_array_to_mv(json_keys(_raw))
| eval stanza="==".app."/".configFileName.".conf== [".title."] "
| foreach jsonKeys mode=multivalue [| eval stanza=stanza.IF(<<ITEM>> IN ("id","secretField","title","configFileName","configPath","isJson","clear_password","app"),"",<<ITEM>>."=".coalesce(json_extract(clear_password,<<ITEM>>),json_extract(_raw,<<ITEM>>))." ")]
| table stanza
Hello @Navneet_Singh @MaureenLynch, As mentioned by @livehybrid, this is a known bug in the add-on, so you will have to wait until it gets fixed in a coming release. However, as a temporary workaround, after deleting the desired row(s), make a minor edit to any other cell (e.g., insert a blank space) before saving the file. This forces the app to register the change and update the lookup.
Wait. You're mixing different things here. If you have very low memory usage and there are still some pages swapped out, it means you have large chunks of process memory that have not been used for a long time (for example, a daemon which is just sleeping most of the time and whose code and data are mostly never accessed). In that case it's indeed better for the OS to swap it out and use the freed memory pages for cache/buffers. One big caveat though: if at some point the process requests access to those swapped-out pages, the kernel will start loading them from disk. If that only comes at the price of dropping some cache pages, probably no one will even notice. But if it needs to swap out some active memory pages... that might get ugly. And even on modern systems with NVMe disks (which are not that widespread yet), RAM access is way faster than disk transfer.
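(If you want to see which processes those swapped-out pages actually belong to, here is a Linux-specific one-liner sketch; VmSwap is reported per process in /proc/<pid>/status on modern kernels:)

    grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -t: -k3 -rn | head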
[Solution] @buzzard192 You can also successfully handle multivalues by following these steps:

1. Sample log: /opt/log/iprange.log
[14/May/2025:14:22:11] systemIP="192.168.1.10,10.10.10.10"

2. Lookup file: /opt/splunk/etc/apps/myapp/lookups/systemIPLookup.csv
cidr,location,region
192.168.1.0/24,Site-A,East
10.10.10.0/24,Site-B,East

3. transforms.conf file: /opt/splunk/etc/apps/myapp/local/transforms.conf
[IPRange]
INGEST_EVAL = systemIP=replace(_raw, ".*systemIP=\"([^\"]+)\".*","\1"), systemIP:=split(systemIP,","), JSON=lookup("IPRangeLookup", json_object("cidr", $mv:systemIP$), json_array("location", "region"))

[IPRangeLookup]
batch_index_query = 1
case_sensitive_match = 1
filename = systemIPLookup.csv
match_type = CIDR(cidr)
max_matches = 1

4. props.conf file: /opt/splunk/etc/apps/myapp/local/props.conf
[(?::){0}host::*]
TRANSFORMS = IPRange

5. Result: (screenshot omitted) each ingested event gets a JSON field containing the location/region matched for every address in systemIP.
As @sylim_splunk already stated, it's managed by the OS. If your memory usage is minimal and swap is completely used, it is usually no problem, especially on modern servers with NVMe SSD storage. If you really don't want the system to swap, you can disable swap via:

sudo swapoff -a

Keep in mind that if the system uses all its RAM and swap is off, the OOM killer in Linux might kill your Splunk processes, which can lead to loss of searches/search results.
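(A gentler alternative than disabling swap outright, assuming a sysctl-based Linux distribution: lower vm.swappiness so the kernel strongly prefers dropping cache pages over swapping out process memory, while keeping swap as a safety net:)

    sudo sysctl vm.swappiness=10
    # persist the setting across reboots
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf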