All Posts

I'm able to calculate the time difference between the start and end time of my job. I want to display the string value in a bar chart; how can I achieve this?

index=music Job=* | eval Duration=(end-start_time) | chart values(Duration) as Duration by "Start Time"
@kiran_panchavat, thanks, but it's still not clear to me. Do you mean this sentence in the solution you gave: "Alerts are triggered if the specified search yields a non-empty search result list."? It still looks like a bug to me, or at least it's very unclear.
I need to upload CSS and HTML files to Splunk Cloud. Please help me with the steps to upload them and use them in dashboard customisation, since there is no option to upload assets in Splunk Cloud.
Hi @gcusello - Will this work if we give some more values to be considered for indexing in transforms.conf?

[setparsing]
REGEX = systemd | auditd | CROND
DEST_KEY = queue
FORMAT = indexQueue

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
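For reference, a sketch of the props.conf wiring this selective-indexing pattern typically relies on (the sourcetype name here is a placeholder, not from the post). The order of the transforms matters: setnull must come first so that setparsing can route the matching events back to the index queue afterwards.

# props.conf -- placeholder stanza for illustration
[your_sourcetype]
TRANSFORMS-set = setnull, setparsing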
At times, LKUP_DSN will match exactly with DS_NAME. In other instances, LKUP_DSN will contain all the characters of DS_NAME except for the last nine characters.
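A minimal run-anywhere sketch of one way to express both cases in SPL (the sample values and the substr logic are assumptions for illustration; only the field names LKUP_DSN and DS_NAME come from the post):

| makeresults
| eval DS_NAME="PROD_DB_DATASET_20240101", LKUP_DSN="PROD_DB_DATASET"
``` Drop the last nine characters of DS_NAME for the partial-match case ```
| eval DS_NAME_TRIMMED=substr(DS_NAME, 1, len(DS_NAME)-9)
``` Match either the full name or the trimmed name ```
| eval matched=if(LKUP_DSN=DS_NAME OR LKUP_DSN=DS_NAME_TRIMMED, "yes", "no")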
Hello Nagarjuna,

Thanks for posting your questions to our community.

> How is the compatibility between the OTel collector and AppDynamics? Is it efficient and recommendable?

It depends on the agent type, because not all languages are supported. Please check the doc (Backend Languages): https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/splunk-appdynamics-for-opentelemetry/support-for-cisco-appdynamics-for-opentelemetry

> If we use the OTel collector for exporting data to AppD, is it still required to use AppD agents as well?

Not necessarily. However, if you have an application that is monitored with AppDynamics Java, .NET, or Node.js agents, you can instrument AppDynamics agents in your application to report both OpenTelemetry span data and AppDynamics SaaS data. Please check the doc: https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/splunk-appdynamics-for-opentelemetry/instrument-applications-with-splunk-appdynamics-for-opentelemetry

> How will the licensing work if we use OTel for exporting data to AppD?

Please refer to the doc: https://docs.appdynamics.com/appd/24.x/latest/en/splunk-appdynamics-licensing/license-entitlements-and-restrictions#id-.LicenseEntitlementsandRestrictionsv24.9-AppDynamicsforOpenTelemetry%E2%84%A2

> Is the OTel collector compatible with both the on-premise and SaaS environments of AppD?

Only the SaaS controller supports OpenTelemetry. Please check the doc (Before You Begin): https://docs.appdynamics.com/appd/24.x/latest/en/application-monitoring/splunk-appdynamics-for-opentelemetry/view-opentelemetry-data-in-the-controller-ui#id-.ViewOpenTelemetryDataintheControllerUIv24.7-BeforeYouBegin

Hope this information helps.

Xiangning
Please show what your mailto link looks like in your XML 
The mv functions generally take a multivalue (MV) field as input and perform an operation on each of its values, so the solution

| eval filtered=mvfilter(mvfield!="N/A")

is saying: for each value of the MV field called mvfield, match it against the string "N/A", and if it does NOT match (!="N/A"), return that value to the new field filtered, appending each non-matching value to that new field. That new field will then contain all the values of the original mvfield that did not match the string.

The eval then finally puts the "N/A" string back into the filtered field, so that if ALL values of the original field contained N/A, the new field will have a single N/A value.

If you wanted ALL the N/A instances to be present, then replace the mvfilter line with

| eval filtered=coalesce(mvfilter(mvfield!="N/A"), mvfield)

so that if you have N/A 3 times in your original, you will have N/A 3 times in your final result.
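A minimal run-anywhere sketch of the behaviour described above (the field name mvfield and the sample values are made up for illustration):

| makeresults
| eval mvfield=split("A,N/A,B,N/A", ",")
``` Keep only the values that are not N/A ```
| eval filtered=mvfilter(mvfield!="N/A")
``` If every value was N/A, mvfilter returns null, so put a single N/A back ```
| eval filtered=coalesce(filtered, "N/A")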
mvfilter() is indeed the way to go, but you need to do a bit more bending over backwards to get it only when you need it. A run-anywhere example:

| makeresults format=csv data="row
1;2;1;3
1;2;3;4
1;1;1;1
4;3;2;4;5
1;1;2;3
1;4;3;2
3;4;5;2
5;5;5
1;1"
| eval split=split(row,";") ``` This creates a set of example data ```
| eval totalcount=mvcount(split) ``` This calculates how many elements we have ```
| eval onecount=mvcount(mvfilter(if(split="1",true(),false()))) ``` This counts how many ones we have ```
| eval filtered=if(onecount>0 AND onecount<totalcount,mvfilter(if(split="1",false(),true())),split) ``` And this filters out the ones, but only if there was at least one (that check is generally not needed) and there are fewer ones than total values ```
@dsky55 Splunk Enterprise 9.x officially supports Python 3.7+, but some apps and add-ons may still include Python 2.x code. Even if your Splunk installation has Python 3.7.17, the Upgrade Readiness App scans app files for deprecated Python 2.x code. Please check the documentation below for more information: https://docs.splunk.com/Documentation/Splunk/9.4.0/UpgradeReadiness/ResultsPython
@AJH2000 Did you try checking with Postman or cURL?
@michael_vi

rex mode=sed "s/\*{10,}[\s\S]*?\*{10,}\n//g"

This removes everything between (and including) the lines of **************************************. You can apply the configuration in props.conf and transforms.conf:

props.conf

[YOUR_SOURCETYPE]
TRANSFORMS-remove_header = remove_header_content

transforms.conf

[remove_header_content]
REGEX = \*{10,}[\s\S]*?\*{10,}\n
FORMAT =
DEST_KEY = _raw
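Before deploying the config, a quick way to sanity-check the expression at search time (a sketch; the index and sourcetype names are placeholders):

index=your_index sourcetype=YOUR_SOURCETYPE
| rex mode=sed "s/\*{10,}[\s\S]*?\*{10,}\n//g"

Note that rex mode=sed only rewrites events in the search results; removing the block at index time still needs the props.conf/transforms.conf change above.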
@michael_vi You can try a regex to meet your requirement.
What have you tried so far?  How did those results not meet expectations? Have you experimented with https://regex101.com?
When setting a value for the MetaData:Sourcetype key, the value MUST be prefixed with "sourcetype::".

[set_sourcetype_1]
REGEX = myhost\.pl
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mytype1
WRITE_META = true

See https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/Transformsconf#KEYS:
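For completeness, a sketch of the props.conf side that applies this transform at index time (the stanza name is a placeholder; match it to however your data actually arrives):

# props.conf -- placeholder stanza for illustration
[source::/path/to/your/input]
TRANSFORMS-set_sourcetype = set_sourcetype_1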
Hello Team,

9.4.0, troubleshooting prod; replicated the issue in staging. I have 1 indexer only. Performing all searches on that indexer:

- when I search for "index=index1 sourcetype=mytype1" I get 0 results
- when I search for "index=index1" I get 1000 results and can see all of those are of sourcetype=mytype1
- when I search for "index=index1 | stats count by sourcetype" I see 0 statistics
- when looking at those events manually, all of them are of sourcetype=mytype1
- checked the job inspector; all looks good, nothing special

I am admin with full access. Searching over 15 min and over all time (no difference, the same results).

Sourcetype "mytype1" has been created by transforms:

[set_sourcetype_1]
REGEX = myhost\.pl
DEST_KEY = MetaData:Sourcetype
FORMAT = mytype1
WRITE_META = true

There is no other definition of that sourcetype anywhere else (should I add it somewhere?).

What is wrong? Why can I not search by sourcetype?

Thanks,
I also used this method; it is very simple and ingenious. Thanks.
Hi. I have a file and I want to remove a portion of it during index time: remove all the text between the lines of **************************************.

For example:

**********************************************************************
Started at : 25/02/16 04:07:04
Terminated at:
Elapsed time :
Software:
Version: 6.0.0.0
Built : 6.0.0.0.20141102.1-Release_ 14/11/02 10:06:52
Context:
Account: SOC
Machine: NEW
IP addr: 255.555.543
CPU : Dual-Core
LOG Recycle Count:
**********************************************************************
25/02/16 04:07:04.834 | 7904 | TEST1
25/02/16 04:07:04.834 | 7904 | TEST2
25/02/16 04:07:04.865 | 7860 | TEST3
25/02/16 04:07:04.881 | 7860 | TEST4
...

In the end I need to get:

25/02/16 04:07:04.834 | 7904 | TEST1
25/02/16 04:07:04.834 | 7904 | TEST2
25/02/16 04:07:04.865 | 7860 | TEST3
25/02/16 04:07:04.881 | 7860 | TEST4

Please assist. Thanks
The newer version is not stable right now. For example, the documentation says it has enhanced workflows, but there is no option available to turn them on; they are disabled by default. We cannot open the correlation searches because they have added versioning of searches, and you cannot open versions edited in 7.3 or prior to 8. We can't create short IDs to track notables, we can't filter based on short ID, and there are many more issues.
Hi @Nawab,

Notables are in a dedicated index that has the same name in both versions, so there's no issue in downgrading.

About Correlation Searches, it's always a best practice to save them in a dedicated app, not in the Enterprise Security app, but in any case they are in the local folders, so the new installation doesn't touch them.

But the safest approach is to ask Splunk Support.

Only for my information: why do you want to downgrade?

Ciao.
Giuseppe