All Posts

Sure, it is. But it's formally a different command
CLONE_SOURCETYPE makes a clone of the event you have, sets the sourcetype you provide for it, and pushes the clone back to the front of the processing pipeline. I'm not 100% sure (you'd have to test it) but I'd assume that if you overwrote source and host before arriving at the transform cloning the event, you'd have your new host and source applied. Two caveats:
* The duplicated events receive index-time transformations and SEDCMD commands for all transforms that match their new host, source, or sourcetype.
* This means that props.conf settings matching on host or source will incorrectly be applied a second time.
So yep, something like your props.conf, but:
1. The set-sourcetype transform would have to use CLONE_SOURCETYPE to recast the sourcetype to your linux_audit.
2. You'd have to make sure that your transforms are applied in the proper order (first adjust the metadata, then clone the sourcetype, finally drop the original to nullQueue).
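To make the ordering concrete, here is a minimal props.conf sketch (untested; clone_to_linux_audit is a hypothetical transform name, and within a single TRANSFORMS class the comma-separated transforms run in the order listed):

# props.conf (sketch)
[logstash]
# order matters: fix metadata first, then clone, then drop the original
TRANSFORMS-logstash_pipeline = set_hostname_logstash, set_source_logstash_linux, clone_to_linux_audit, send_to_NullQueue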
After installing and configuring the Machine Agent on a local machine to collect metrics, the metrics are not populating properly and the data displayed is incomplete. We are not able to see CPU percentage, memory percentage, etc. Please suggest how to pull complete metrics into AppDynamics. Are any configuration file changes needed, or any config changes in the AppDynamics UI?
Hi @pvarelab, sorry but your question isn't so clear to me: you have an on-premise DS that you use to deploy apps to your on-premise forwarders. First, I'd suggest putting in two Heavy Forwarders as concentrators, to avoid opening an internet connection from all systems to Splunk Cloud. If you don't have too many clients, you could use one of these HFs as the DS. Then on the DS you store all the apps to deploy to the clients, and you deploy them based on ServerClasses. Why do you want to manage a precedence in installation? You should deploy already-configured apps. The only point of attention is to analyze your deployment requirements and design your ServerClasses very carefully. Ciao. Giuseppe
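For illustration only (class and app names are hypothetical), the ServerClasses on the DS could look like this in serverclass.conf:

# serverclass.conf on the deployment server (sketch)
[serverClass:linux_prod]
# which clients belong to this class
whitelist.0 = lnx-prod-*

[serverClass:linux_prod:app:org_linux_inputs]
# restart the forwarder after deploying this app
restartSplunkd = true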
I installed CyberChef on Splunk Enterprise, that's it. I was trying to test out the application on a local machine with an install of Splunk on a "free" license. This did not work at all, so I rolled out the app in our Enterprise test environment, and there it did work. In other words, I did not really do anything to "fix" it; it just worked once there was a valid Enterprise license.
Hi @isoutamo, probably it isn't so clear to me how HEC works, but how can the endpoint be relevant? Events are processed and parsed as usual: Parsing, Merging, Typing and Indexing. The issue, I suppose, is in the precedence of the activities listed in props.conf as they are applied to events. Ciao. Giuseppe
Hi @PickleRick, thank you for your answer, sorry but I don't understand:
1. if I clone the sourcetype, can I also pass the host and source values (that I extracted from the JSON fields) to the new one?
2. if I clone the sourcetype, do you think that I can apply transformations to the new sourcetype?
3. can the send to nullQueue run after the host and source overriding and the cloning?
Let me understand, you suggest a props.conf like the following:

[logstash]
# set host
TRANSFORMS-sethost = set_hostname_logstash
# set sourcetype Linux
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
# set source
TRANSFORMS-setsource = set_source_logstash_linux
# send to NullQueue
TRANSFORMS-send_to_NullQueue = send_to_NullQueue

# restoring original raw log
[linux_audit]
SEDCMD-raw_data_linux_audit = s/.*\"message\":\"([^\"]+).*/\1/g

Is it correct? Ciao. Giuseppe
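For reference, a sketch of what the matching transforms.conf stanzas for the host and source overrides could look like (the regexes are assumptions about the JSON layout; adjust to the actual events):

# transforms.conf (sketch)
[set_hostname_logstash]
SOURCE_KEY = _raw
REGEX = "host":"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1

[set_source_logstash_linux]
SOURCE_KEY = _raw
REGEX = "source":"([^"]+)"
DEST_KEY = MetaData:Source
FORMAT = source::$1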
Hi @gwen, sorry but I don't understand what you mean by variable. A Correlation Search is an alert, so you cannot pass a token to it. Could you share your complete Correlation Search source code? Ciao. Giuseppe
Hey @fatsug, can you please, please be a little bit more concrete? Because I am having exactly the same issue, but it IS on Splunk Enterprise. So, what did you do to fix it? Thanks in advance
I have a Splunk Cloud instance where we send logs from servers with the Universal Forwarder installed. All UFs are managed by a Deployment Server. My question is: what are the best practices on how to organize apps, both Splunkbase-downloaded and in-house built, and also configuration-only apps, if those are a best practice? Right now we are experimenting with deploying the Splunkbase apps as they are (easier to update them) and deploying the configuration in an extra app whose name starts with numbers so its configuration takes precedence. But we have run into some issues in the past with this approach.
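For illustration, the layering described above might look like this on the deployment server (app names are hypothetical); in the global context, conf files in app directories that sort earlier in ASCII order take precedence, which is why the leading digit works:

deployment-apps/
    Splunk_TA_nix/              # Splunkbase app, deployed as-is
        default/inputs.conf
    0_org_nix_settings/         # config-only app; "0" sorts before letters, so its local/ settings win
        local/inputs.conf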
Hi All, I have a requirement to onboard data from a website like http://1.1.1.1:1234/status/v2. It is a vendor-managed API URL, so the application team cannot use the HEC token option. So I prepared a script to get the data and tested it locally, and the script works as expected. I created a forwarder app with a bin folder, kept the script in it, and pushed the app to one of our Integration Forwarders, but I am unable to get any data in Splunk. I have tested the connectivity between our IF and the URL and it is successful (did a curl to that URL and was able to see the URL content). I have checked firewall and permissions, all seems to be OK, but still I am unable to get data in Splunk. I also checked the internal index but don't find anything there. Can someone guide me on what else I need to check in order to get this fixed? Below is my inputs.conf:

[monitor://./bin/abc.sh]
index=xyz
disabled=false
interval = 500
sourcetype=script:abc
source=abc.sh

I have also created props as below:

[script:abc]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = true
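For comparison (not asserting this is the fix here), scripted inputs are normally declared with a script:// stanza rather than monitor://; the usual shape, reusing the same hypothetical paths and values, is:

# inputs.conf (reference sketch for a scripted input)
[script://./bin/abc.sh]
index = xyz
disabled = false
# run the script every 500 seconds
interval = 500
sourcetype = script:abc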
When I need INGEST_EVAL, I (almost) always use the Splunk GUI to test it. You should replace the normal rex command with the replace() eval function (which is actually rex :-). Then just add all those eval commands on one line.
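A minimal sketch of that workflow (the sample event and regex are made up for illustration):

| makeresults
| eval _raw="{\"message\":\"original payload\",\"host\":\"web01\"}"
| eval message=replace(_raw, ".*\"message\":\"([^\"]+).*", "\1")

Once the replace() expression behaves as expected in search, the same regex can be moved into the INGEST_EVAL setting.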
Hello Experts,

This is a long-explored query that I am trying to find a way around. If we do a simple query like this:

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode
| fields country, ProductCode, type, Failed_Count, Passed_Count, Total

this simple query gives me a result table where the Total belongs to the specific country and ProductCode, i.e. an individual Total. Now there is this field 'errorinfo'; what I want is to also show the 'errorinfo' (e.g. "codeerror") in the above list, like this:

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode, errorinfo
| fields country, ProductCode, type, Failed_Count, Passed_Count, errorinfo, Total

This table shows results like this:

country  ProductCode  type  Failed_Count  Passed_Count  errorinfo             Total
usa      111          1c    4             0             wrong code value      4
usa      111          1c    6             0             wrong field selected  6
usa      111          1c    0             60            NA                    70

How can I get results like the table below, where Total remains the complete total of the txnStatus field (FAILED+SUCCEEDED)? If I can achieve this I can do % of Total as well. Note that the Total belongs to one country: the usa rows show the usa total and the can rows show the can total.

country  ProductCode  type  Failed_Count  errorinfo             Total
usa      111          1c    4             wrong code value      70
usa      111          1c    6             wrong field selected  70
can      222          1b    2             wrong entry           50
can      222          1b    6             code not found        50

Thanks in advance
Nishant
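One possible approach (a sketch, untested): keep errorinfo in the stats split, then let eventstats recompute the overall total per country/type/ProductCode across the errorinfo rows:

index=zzzzzz
| stats count as RowCount, count(eval(txnStatus="FAILED")) as Failed_Count, count(eval(txnStatus="SUCCEEDED")) as Passed_Count by country, type, ProductCode, errorinfo
| eventstats sum(RowCount) as Total by country, type, ProductCode
| fields country, ProductCode, type, Failed_Count, errorinfo, Total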
Hello, I have a correlation search with a variable that doesn't work:

| stats count by host
| eval hello_world = host

When I'm looking in Incident Review, my alert shows $hello_word$ and not my host values. Can you help me please? Splunk ver 7.3.5
Autoregress is the same as | streamstats window=2 current=f last(Score) as Score_p1
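A quick toy run to see the equivalence (field name taken from the post):

| makeresults count=5
| streamstats count as Score
| autoregress Score p=1

autoregress creates Score_p1 holding the previous event's Score, which is exactly what the streamstats expression computes.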
Thank you, sir, for the inputs shared. I will come back if anything else is needed.
Just do a lookup using both fields (source IP and destination host) and output one of those fields as a new field. Something like:

| lookup allowed_ips IP AS src_ip HOST AS dst_host OUTPUT HOST AS matchhost

This will create a field called matchhost, which will be populated only if both src_ip and dst_host in your event match one of the entries in your lookup. You can now search for the events matching or not matching your criteria by checking whether matchhost is null or not.
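For example, to keep only the events that are not in the allow list (a sketch; the index name is hypothetical):

index=firewall
| lookup allowed_ips IP AS src_ip HOST AS dst_host OUTPUT HOST AS matchhost
| where isnull(matchhost)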
Events are ingested one at a time and there is no "state" which could be carried over from one event to another. So limiting your ingestion this way directly in Splunk is not possible. If you can define it not as "ingest only the first 3 events" but as "(don't) ingest events (not) matching a given pattern", that's another story.
If I understand you correctly, you want to correct your sourcetype first and then fire the transforms for the new sourcetype, right? It won't work that way. The set of transforms to execute is chosen at the beginning of the pipeline based on the event's sourcetype/source/host and is not changed later, even if you overwrite those metadata fields. The only way I see to do it would be to use CLONE_SOURCETYPE to make a copy of your event with the new sourcetype set properly (this one will be processed from the beginning using the new sourcetype's props and transforms) and drop the original event by sending it to nullQueue. Yes, it does create some processing overhead, but I don't see another way if you can't make your source send reasonably formatted data.
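A minimal transforms.conf sketch of that pattern (stanza names are hypothetical):

# transforms.conf (sketch)
[clone_to_linux_audit]
# match every event and re-inject a copy with the new sourcetype
REGEX = .
CLONE_SOURCETYPE = linux_audit

[send_to_NullQueue]
# drop the original event
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue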
Hi All, of the 10 error records below, the last 3 error records should not be ingested; only the data from the first 7 error records should be ingested. How do we write the regular expression? Please guide me.

2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: No package class found with name: PRD-QDB35801A
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: (INVALID_DATA) Invalid value [V5100000003P211] specified for parameter [package_class__c] : Object record ID does not resolve to a valid active [package_class__c]
2023-11-06 15:30:48,941 ERROR https-jsse-nio-8443-exec-9 com.veeva.brp.batchrecordprint.BatchRecordPrintController - PRINT ERROR: Print failure response
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - Unknown error: {errorType=GENERAL, responseStatus=EXCEPTION, responseMessage=502 Bad Gateway}
2023-11-06 15:30:48,941 ERROR https-jsse-nio-8443-exec-2 com.veeva.brp.batchrecordprint.BatchRecordPrintController - (API_LIMIT_EXCEEDED) You have exceeded the maximum number of authentication API calls allowed in a [1] minute period.
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: No package class found with name: PR01-PU3227V1MSPS 0001
2023-11-08 06:19:49,539 ERROR https-jsse-nio-8443-exec-1 com.veeva.brp.batchrecordprint.BatchRecordPrintController - DOCLIFECYCLE ERROR: Error initiating lifecycle action for document: 5742459, Version: 0.1
2023-10-25 10:56:46,710 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - Header Field Name: bom_uom_1_c, value:E3HR5teHlfOQjzUJ74jTdKh1Tu0yajHqT/H98klZOyU=
2023-10-25 10:56:46,711 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - BOM Field Name: BOM_Added_1, value is out of Bounds using beginIndex:770, endIndex:771 from line:
2023-10-25 10:56:46,711 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks
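Assuming the com.veeva.bpr.batchrecordprint.scheduledTasks logger (bpr instead of brp, lowercase scheduledTasks) reliably identifies the last 3 records, a props/transforms pair along these lines could drop them at index time (a sketch; the sourcetype name is hypothetical):

# props.conf
[veeva:brp]
TRANSFORMS-drop_bpr = drop_bpr_records

# transforms.conf
[drop_bpr_records]
# send events from the bpr logger to the null queue
REGEX = com\.veeva\.bpr\.batchrecordprint\.scheduledTasks
DEST_KEY = queue
FORMAT = nullQueue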