All Posts


I mean you use up additional license volume when you index additional data with the collect command, unless you use the stash or stash_hec sourcetypes. So each event you first index into index A and then search, transform, and collect into index B costs you roughly twice (depending on what processing you do before collecting) the license usage it would incur just by being indexed into index A. Whether you stay within your license limits of course depends on the overall amount of ingested data and your license size.
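A minimal sketch of the difference (the index and sourcetype names here are made up for illustration):

Collect with the default stash sourcetype - not metered again:
index=index_a sourcetype=my:data | stats count by host | collect index=index_b

Collect while forcing a non-stash sourcetype - metered against the license a second time:
index=index_a sourcetype=my:data | collect index=index_b sourcetype=my:data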
Of course you _can_ do search & collect. It's just not something that's typically done, since you'd have to first ingest the data "normally" and then split it with a search into another two indexes (since you don't want group A to see index B and vice versa). And if you wanted to use the original sourcetype (or any sourcetype other than stash or stash_hec), you'd double your license usage. If there is not much data that might be acceptable, but typically it's a waste of perfectly good license, a waste of resources to search, split and collect, and additional lag on ingest. So that's why you don't typically do it this way. And I don't get why you would want separate apps. Anyway, now you're saying that you want to speed up searches, whereas before you said it was due to access restrictions. And there is definitely something to work on with your data format if you indeed have a mix of various formats within one JSON structure which might or might not be an array... That seems to call for some sanitization process on ingest.
Splunk Add-on for AWS not working for CloudWatch logging. I have the Splunk Add-on for AWS installed on my Splunk search head. I am able to authenticate to CloudWatch and pull logs. It was working fine, but for the last couple of days I am not getting logs. I see no errors in the logs, and I am seeing events stored with an old timestamp when I compare index time vs _time. Earlier that was not the case; it was up to date. I don't see any errors related to lag or the like. Splunk version: 9.2.1. Splunk Add-on for AWS: 7.3.0. I checked that this version is compatible with Splunk 9.2.1. Sharing a snapshot which displays the index time and _time difference. I tried disabling/enabling inputs but that also didn't help. What props are being used for aws:cloudwatchlogs, and what is the standard from CloudWatch? Will it have an impact if someone has defined a random format or a custom timestamp for their Lambda or Glue job CloudWatch events?
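If free-form timestamps inside the CloudWatch messages are being picked up as _time, one thing to test (this is a hypothetical local override, not the add-on's shipped defaults) is to constrain timestamp recognition for that sourcetype on the parsing tier:

props.conf (local)
[aws:cloudwatchlogs]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 32
MAX_DAYS_AGO = 7

MAX_TIMESTAMP_LOOKAHEAD limits how far into the event Splunk looks for a timestamp, and MAX_DAYS_AGO rejects timestamps older than the given number of days, so a stray date written by a Lambda or Glue job would no longer drag _time into the past.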
Hi @BRFZ, I don't think that's a Splunk issue: check the generated logs. If it were a Splunk issue you could have a truncated log, but not a missing internal part of the event, unless you have a masking policy. Ciao. Giuseppe
I was not aware of the licensing implications, thank you and I'll stay in compliance.
Hi,  I need to update an sso_error HTML file in Splunk, but I'm not sure of the best approach. Could anyone provide guidance on how to do this? Thanks in advance for your assistance. 
For example, in some events, we have the IP address, while in others, we just see a dash ("-") or 0, even for the same event ID. Example:
<Event xmlns=' http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Service Control Manager' Guid='{555908d1-a6d7-4695-8e1e-26931d2012f4}' EventSourceName='Service Control Manager'/><EventID> 4624 </EventID><Version>0</Version><Level>4</Level><Task>0</Task><Opcode>0</Opcode><Keywords>0x8080000000000000</Keywords><TimeCreated SystemTime='2014-04-24T18:38:37.868683300Z'/><EventRecordID>412598</EventRecordID><Correlation/><Execution ProcessID='192' ThreadID='210980'/><Channel>System</Channel> <Computer>TEST</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S18</Data><Data Name='SubjectUserName'>BOB</Data><Data Name='SubjectDomainName'>GOZ</Data><Data Name='SubjectLogonId'>x0</Data><Data Name='TargetUserSid'>s20</Data><Data Name='TargetUserName'>BOBT</Data><Data Name='TargetDomainName'>TESTTGT</Data><Data Name='TargetLogonId'>x0</Data><Data Name='LogonType'>x</Data><Data Name='LogonProcessName'>usr </Data><Data Name='AuthenticationPackageName'>Negotiate</Data><Data Name='WorkstationName'>tst</Data><Data Name='LogonGuid'>{845152}</Data><Data Name='TransmittedServices'>-</Data><Data Name='LmPackageName'>-</Data><Data Name='KeyLength'>0</Data><Data Name='ProcessId'>mspam</Data><Data Name='ProcessName'>test.ee</Data><Data Name='IpAddress'>x.x.x.x</Data><Data Name='IpPort'>0</Data><Data Name='ImpersonationLevel'>%%1833</Data><Data Name='RestrictedAdminMode'>mlmpknnn</Data><Data Name='TargetOutboundUserName'>-</Data><Data </EventData></Event>
In this example, it's related to the IP address and port. In some cases, we have a specific IP address, while in others, it's just a dash ("-"). Similarly, for the port, sometimes it shows a dash ("-"), and other times it shows a 0, or sometimes the port is correctly specified.
Sorry for the unclear message. I'd like to select whatever duration in the time picker, i.e. last 30 mins / last 7 days, and be able to look at the past data for that time period. So for the 30 mins today, I'd look at today's 30 mins and then compare with yesterday's 30 mins. Your query actually helps me do that, however there seems to be a limit of 48 hours. In the time picker, I'd like to use the above to select (at most) 7 days' worth of data and look at the previous 7 days' worth of data for comparison. If I wanted to do that, would it be a different query, or could I do it by editing the above query? Please do let me know if that was unclear. Thanks,
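For comparing a full week against the one before it, one sketch (the index name is a placeholder) is to let the time picker cover 14 days and overlay the two weeks with timewrap:

index=my_index
| timechart span=1h count
| timewrap 1week

That produces one series for the latest 7 days and one for the 7 days before it. The same idea works for the 30-minute case by widening the range to cover both days and using timewrap 1day with a smaller span.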
From Step No. 3, "Install new Indexer nodes". Please correct me if I'm wrong; the overall steps that you mention are:
1. Add all new indexers to the same cluster.
2. Increase the replication load between indexers:
#CM
[clustering]
max_peer_build_load = 20 (default 2)
max_peer_rep_load = 50 (default 5)
3. Rebalance the data to reduce the bucket count on the old indexers and make copies of the data on the new indexers.
4. Put one of the old indexers in manual detention to prevent data replication to it:
!! Do this one by one
splunk edit cluster-config -manual_detention on
5. Use the splunk offline --enforce-counts command to stop the indexer and force the Cluster Master to copy the remaining primary buckets to the new indexers:
!! Do this one by one
splunk offline --enforce-counts
6. Remove the old indexer from the cluster:
!! Do this one by one
splunk remove cluster-peers -peers <peer_id>
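For step 3, assuming the rebalance is run from the Cluster Master's CLI (it can also be started from its UI), a sketch of the commands would be:

splunk rebalance cluster-data -action start
splunk rebalance cluster-data -action status
splunk rebalance cluster-data -action stop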
Your requirement is unclear - do you want your 30 minutes for the last 7 days, or 30 minutes and 30 minutes 7 days ago, or 7 days and a different 7 days from some other point in the past?
The task guide for the Forage job sim states this: For example, to add "Count by category" to your dashboard, type out sourcetype="fraud_detection.csv" | top category in the search field. This action counts the number in each category. Yet I am guessing Splunk has been updated since the task guide was created, because the search doesn't recognise the command. I have tried others but am not receiving the desired results. Does anyone know about this, or a different command that would give me a valid bar chart in the visualization?
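One thing worth checking (the sourcetype and field names below are guesses, since they depend on how the CSV was uploaded) is whether the data was actually ingested with that sourcetype; matching on source instead, or spelling out the aggregation, may behave better:

source="fraud_detection.csv" | top category
source="fraud_detection.csv" | stats count by category | sort - count

Either of these should produce a table you can switch to a bar chart in the Visualization tab, provided the events really contain a category field.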
Dear Members, I'm new to Splunk and I'm trying to forward RHEL logs to the indexer. I've done all the necessary configuration to forward the logs, but nothing arrives on the indexer. When I checked the forward-server status using the ./splunk list forward-server command, it was showing inactive because of some file ownership issue:
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
So I ran the command below to change the file ownership:
sudo chown -R splunkfwd:splunkfwd /opt/splunkforwarder
After executing the above command I'm still not receiving the logs, and moreover, when I try to run "./splunk list forward-server" again to check the status of the forwarder, it asks me to enter a username and password again; when I enter them it shows login failed. NOTE: I've tried to log in using both the root and splunk users, but neither worked. Please help me out with this - what should I do to make it work? Thank you.
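A common recovery path here (paths and usernames assume a default Linux UF install; treat this as a sketch rather than an official procedure) is to run the CLI as the splunkfwd user and, if the forwarder's admin password is unknown, reseed it:

sudo -u splunkfwd /opt/splunkforwarder/bin/splunk list forward-server

# if login still fails, move the old credential store aside and reseed the admin user
sudo -u splunkfwd mv /opt/splunkforwarder/etc/passwd /opt/splunkforwarder/etc/passwd.bak
sudo -u splunkfwd tee /opt/splunkforwarder/etc/system/local/user-seed.conf > /dev/null <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = ChangeMe-please
EOF
sudo -u splunkfwd /opt/splunkforwarder/bin/splunk restart

After the restart the forwarder should accept the new admin credentials, and splunk list forward-server will show whether the connection to the indexer is active.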
@ITWhisperer what would I need to do if I wanted to look at a bigger window? My max would be to pick 7 days in my time picker; how would I edit the above to look at that? Thank you in advance
Hey PickleRick, Yeah, I was thinking this. The data is coming in through a modular input, so if I adjust the script I'd be able to parse the events into their respective indexes. But if I do that, I may as well create separate applications altogether for each one, which is what I'm trying to avoid with this exercise. Regarding the data, yes, this is a much simpler example of the more complicated data I'm working with. Essentially each event is JSON data with values that are either string or [array]. archetype is [array] and can be both superhero and villain, so this event should appear in both indexes (but I've simplified it for this example). So is there no possible way to utilise or bypass summary indexing rules to meet my desired use case? I'm still trying to summarise my data by separating superheroes and villains to speed up searches. It seems like a lot of work simply to create separate indexes based on a search. Thanks,
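To keep it inside one app, one sketch (the index names, sourcetype, and search filters come from your simplified example, so adjust them to the real data) is two scheduled searches that each collect into their own summary index with the default stash sourcetype, so there is no extra license cost:

index=main sourcetype=my:json archetype=superhero | collect index=summary_superhero
index=main sourcetype=my:json archetype=villain | collect index=summary_villain

An event whose archetype array contains both values matches both searches and lands in both summary indexes, and role-based access can then be granted per summary index.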
Hi @BRFZ , could you share some sample of your logs: both complete and incomplete logs? Ciao. Giuseppe
Hi All, I wanted to know whether AppDynamics can monitor Salesforce in any way at all. I saw some posts mentioning a manual injector for End User Monitoring on the Salesforce frontend, but are there any more details we can capture from Salesforce? Please share your experience if anyone has tried any custom monitoring. I am looking for some way to get close to APM-style monitoring metrics.
Hello @gcusello, The missing data includes certain event IDs that don’t appear at all, and there are also instances where information is incomplete. For example, several fields are filled with dashes ("-"), indicating a lack of information.
I am using HEC to receive various logs from Firehose. HEC is allowed to use the index names AWS & palo_alto, and the default index is set to AWS. All the logs coming from the HEC are assigned to the default index AWS and the default sourcetype aws:firehose. I am using the config below to change the sourcetype and index name of the logs.

props.conf
[source::syslog:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto, hecpaloalto_in
disabled = false

transforms.conf
[hecpaloalto]
REGEX = (.*)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:log

[hecpaloalto_in]
REGEX = (.*)
DEST_KEY = _MetaData:Index
FORMAT = palo_alto

The sourcetype has changed to pan:log as intended, but the index name still displays as AWS instead of changing to palo_alto. The HEC config has the default index as aws and the selected indexes are aws and palo_alto. Is there anything wrong in my config?
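A couple of things worth checking (the index and sourcetype names below are taken from the description above): the palo_alto index has to actually exist on the indexers, the value in FORMAT has to match its exact lowercase name, and the transforms have to sit on the instance where the HEC/Firehose traffic is parsed. A quick search like this shows where the re-sourcetyped events are really landing and what source value they carry, which tells you whether the [source::syslog:dev/syslogng/*] stanza is matching all of them:

index=aws OR index=palo_alto sourcetype=pan:log
| stats count by index, sourcetype, source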
The forwarder asset table is generated from tcpin_connections metrics in _internal.  FTR, this is done by the Monitoring Console (MC), not the Cluster Manager (CM).  The CM and MC can be co-located in limited conditions - see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Systemrequirements#Additional_roles_for_the_manager_node. Seeing the same forwarder many times often happens when a host is cloned without first preparing the forwarder for cloning.  See the CLONEPREP installer option at https://docs.splunk.com/Documentation/Forwarder/9.3.0/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller. The fix is to delete the GUID on each server (using the splunk clone-prep-clear-config command) and then restart Splunk so it generates a new GUID.  Then have the Monitoring Console generate a new forwarder assets table.
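If it helps, the per-forwarder part of that fix looks roughly like this (assuming a Linux UF installed under /opt/splunkforwarder; the Windows path differs):

/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk clone-prep-clear-config
/opt/splunkforwarder/bin/splunk start

clone-prep-clear-config clears the instance GUID (and related server-specific settings) so a new one is generated on the next start; after all forwarders have been treated, rebuild the forwarder assets table from the Monitoring Console as described above.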
I don't see any fields extracted in the search head. This config is placed on the heavy forwarder in the same app where the input is defined. Even in the search head's Extract Fields tester, the regex just gets a check mark for all the events saying it's a valid regex, but it doesn't display any events. I'm assuming $1::$2 will be used to assign the field name and field value.
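For reference, a minimal sketch of an index-time extraction using $1::$2 (stanza, sourcetype, and field names here are placeholders) needs three pieces: props and transforms on the heavy forwarder, where parsing happens, and fields.conf on the search head:

transforms.conf (heavy forwarder)
[my_indexed_field]
REGEX = <your regex with two capture groups>
FORMAT = $1::$2
WRITE_META = true

props.conf (heavy forwarder)
[my:sourcetype]
TRANSFORMS-indexedfields = my_indexed_field

fields.conf (search head)
[my_field_name]
INDEXED = true

Without WRITE_META = true the extracted field is never written to the index, and without the fields.conf entry the search head doesn't know the field is indexed, either of which can produce the symptom of a regex that validates but shows no fields.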