All Posts


Maintenance mode is one parameter in the server.conf file, so when you copy it to the target it will be there. Then just disable maintenance mode and it will be removed from server.conf. If you change both the name and the IP there could be issues, as all peers and SHs use the name or IP to recognize the cluster! I'm not 100% sure whether the peers are actually recognized by GUID, but I would almost propose that you do an offline rather than an online migration, and you need to change this on all peers before starting them. The same applies to the other components/nodes.
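For reference, checking and clearing maintenance mode from the CLI on the (new) cluster manager would look something like this; these are the standard cluster manager commands, run from $SPLUNK_HOME/bin:

```
# Check whether maintenance mode is still active after the migration
splunk show maintenance-mode

# Lift it; this also removes the setting from server.conf
splunk disable maintenance-mode
```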
Thanks. It looks like you migrated the IP but kept the DNS names. We'll be moving both. If we enable maintenance mode on the old Cluster Manager and then migrate, how would we ensure that maintenance mode is lifted after moving to the new one?
Hi @mattymo , thanks for your detailed explanation. What format would be good so that the JSON data gets extracted automatically in Splunk? I can suggest that the sender follow that format if possible. Also, will there be any problem if they remove the unwanted material, such as the date and time? They want all the JSON field values to be extracted, not just specific ones, and it would be difficult to write a regex for all of them.
@rohithvr19  The Python file should be in the bin folder of your app. Can you please confirm whether the individual script works on its own? Have you tried my shared app on your local machine? If my app works fine, then try adding your Python code to it. Thanks, KV
You have probably already looked at https://dev.splunk.com/enterprise/tutorials/module_setuppages/plansetup ? One place where you could look for help is the CIM app. Another is the TA for *nix, where those input values are modified on the setup screen.
Hi, looking at the activity of the splunkd threads on the indexers, I've seen in the Monitoring Console that sometimes there is no activity for a period of one minute. Is this normal? Evidence: (screenshot attached). Regards, thank you very much.
@Eldemallawy
1. Try this (it gives the amount of license used per index):

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx
| eval mb=round(bytes/1024/1024,3)

If you want the overall usage, you can use this timechart version:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
| timechart span=1d sum(b) as usage_mb
| eval usage_mb=round(usage_mb/1024/1024,3)

For per-index daily usage, you can use this:

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage
| bucket span=1d _time
| stats sum(b) as bytes by _time idx
| eval mb=round(bytes/1024/1024,3)

2. Set up a Monitoring Console: https://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview
Hey, so I have a playbook that fetches multiple files and adds them to the SOAR vault. I can then send each individual file to Jira by specifying the file's vault_id in the update_ticket action of the Jira app. Ideally, I would like to send only one file over to Jira: an archive containing each of the other files. I can create a file and add it to the vault after seeing this post: https://community.splunk.com/t5/Splunk-SOAR/SOAR-Create-File-from-Artifacts/m-p/581662 However, I don't know how I could take each individual file from the vault and add it to this archive before sending it over. Any help would be appreciated! Thanks
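A minimal sketch of the archiving step itself, assuming you have already resolved each vault entry to a local file path (in SOAR that lookup would come from the vault API; the paths here are placeholders):

```python
import zipfile
from pathlib import Path

def build_archive(file_paths, archive_path):
    """Bundle the given files into a single zip archive.

    file_paths: iterable of paths to files already on disk
    archive_path: where to write the resulting .zip
    """
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in map(Path, file_paths):
            # arcname keeps only the file name, giving the archive a flat layout
            zf.write(p, arcname=p.name)
    return archive_path
```

The resulting archive could then be added to the vault itself, and that single vault_id passed to update_ticket so Jira receives one attachment.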
I am building a Splunk dashboard that displays a table of content. Once it's displayed, I want to have a couple of buttons, Stop All and Start All, which when clicked execute a search that invokes Python code to perform the actions. Can someone please advise whether that's possible?
Thank you for the advice. We will prove it out with the customer as soon as we can and will respond.
Hi, I'm wondering whether there is a document or guidance on how to estimate the volume of data ingested into Splunk when pulling data from DNA Centre using the Cisco DNA Center Add-on. Cheers, Ahmed.
Note: I have an active token that looks similar to this: c0865140-53b4-4b53-a2d1-9571d39a5de8
My HTTP request has the following header: Authorization: Splunk c0865140-53b4-4b53-a2d1-9571d39a5de8
My Splunk Cloud settings show the HEC configuration to have SSL enabled and port 8088 (though these settings are grayed out and cannot be adjusted).
Hi Ismo, I am working on developing an app that updates the values in the inputs.conf file from the setup.xml configuration. Additionally, the app retrieves values from the inputs.conf file and loads them into Splunk.
Hi all, I just started a trial for Splunk Cloud; my URL looks similar to this: https://prd-p-s8qvw.splunkcloud.com/en-GB/app/launcher/home

I want to get data in with the HEC. I have read the following documentation: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/Data/UsetheHTTPEventCollector#Configure_HTTP_Event_Collector_on_Splunk_Cloud_Platform

According to the documentation, my URL should look like this:
https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

However, this does not work. It seems the DNS cannot be resolved; my Node.js client gives "ENOTFOUND". I have tried different options (HTTP/HTTPS, host, port, etc.):

HTTP / HTTPS:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

GCP style (dot separator):
http://http-inputs.prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs.prd-p-s8qvw.splunkcloud.com:8088/services/collector/event

Host variations:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs-s8qvw.splunkcloud.com:8088/services/collector/event
http://http-inputs.s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs-s8qvw.splunkcloud.com:8088/services/collector/event
https://http-inputs.s8qvw.splunkcloud.com:8088/services/collector/event

Port 443 variations:
http://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs.prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs.prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-p-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs-s8qvw.splunkcloud.com:443/services/collector/event
http://http-inputs.s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-p-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs-s8qvw.splunkcloud.com:443/services/collector/event
https://http-inputs.s8qvw.splunkcloud.com:443/services/collector/event

No prefix:
http://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:8088/services/collector/event
http://p-s8qvw.splunkcloud.com:8088/services/collector/event
http://s8qvw.splunkcloud.com:8088/services/collector/event
https://p-s8qvw.splunkcloud.com:8088/services/collector/event
https://s8qvw.splunkcloud.com:8088/services/collector/event
http://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
https://prd-p-s8qvw.splunkcloud.com:443/services/collector/event
http://p-s8qvw.splunkcloud.com:443/services/collector/event
http://s8qvw.splunkcloud.com:443/services/collector/event
https://p-s8qvw.splunkcloud.com:443/services/collector/event
https://hs8qvw.splunkcloud.com:443/services/collector/event
https://s8qvw.splunkcloud.com:443/services/collector/event

None of these work. All give one of the following errors:
Error: getaddrinfo ENOTFOUND http-inputs-prd-p-s8qvw.splunkcloud.com
Error: read ECONNRESET
HTTP 400 Sent HTTP to port 443
HTTP 404 Not Found

Can anybody help me get this working?

Regards,

Lawrence
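For anyone comparing request shapes, here is a minimal stdlib-only Python sketch of a HEC event request; the host is the one from the question and the token is a placeholder, so actually sending it assumes a reachable, HEC-enabled instance:

```python
import json
import urllib.request

# Placeholder host and token -- substitute your own instance values.
HEC_URL = "https://http-inputs-prd-p-s8qvw.splunkcloud.com:443/services/collector/event"
HEC_TOKEN = "c0865140-53b4-4b53-a2d1-9571d39a5de8"

def make_hec_request(event, sourcetype="_json"):
    """Build (but do not send) an HTTP Event Collector POST request."""
    payload = json.dumps({"event": event, "sourcetype": sourcetype}).encode("utf-8")
    return urllib.request.Request(
        HEC_URL,
        data=payload,
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it:
# with urllib.request.urlopen(make_hec_request({"msg": "hello"})) as resp:
#     print(resp.read())
```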
[md_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1
DEST_KEY = time_temp

[md_subsecond]
SOURCE_KEY = _meta
REGEX = _subsecond::(\.\d+)
FORMAT = $1
DEST_KEY = subsecond_temp

[md_fix_subsecond]
INGEST_EVAL = _raw=if(isnull(subsecond_temp), time_temp + " " + _raw, time_temp + subsecond_temp + " " + _raw)

[md_time_default]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1 $0
DEST_KEY = _raw

The problem seems to be somewhere in md_time, md_subsecond or md_fix_subsecond, because if I use md_time_default, it works (though without subseconds), and if I enable these three instead of md_time_default, then I get no output: the packets emitted by Splunk seem to be empty, without a payload.
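To sanity-check the intended logic outside Splunk, here is a small Python simulation of what the three transforms should produce together; this is my reading of the config, not Splunk's actual ingest pipeline:

```python
import re

def fix_subsecond(time_value, meta, raw):
    """Mimic md_time + md_subsecond + md_fix_subsecond:
    prepend the epoch time (plus any _subsecond from _meta) to _raw."""
    time_temp = f"_ts={time_value}"               # md_time: FORMAT = _ts=$1
    m = re.search(r"_subsecond::(\.\d+)", meta)   # md_subsecond regex on _meta
    subsecond_temp = m.group(1) if m else None
    # md_fix_subsecond: the INGEST_EVAL if/isnull branch
    if subsecond_temp is None:
        return time_temp + " " + raw
    return time_temp + subsecond_temp + " " + raw
```

If this produces the expected _raw for your sample events, the empty-payload symptom is more likely an ordering or DEST_KEY issue in the pipeline than a flaw in the if/isnull logic itself.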
Old post, but I managed this without JavaScript, using only CSS. Set a table ID like my_table_with_default_sort. In CSS, target a fixed or dynamic data-sort-key on the column you want to sort; in this example a token is used that matches the sort used in the query.

#my_table_with_default_sort table th[data-sort-key="$tok_sort_column$"] a i::before {
  content: "↥"; /* for ASC; for DESC use "↧" */
  display: inline-block;
  font-size: 12px;
  margin-right: 5px;
}
Hi, I cannot recall any such API supported by Splunk. Of course you can build it yourself if you really want to, but there is probably a better way to achieve what you are aiming for. So what is the issue you are trying to solve? r. Ismo
Hi Splunk Community, I am looking to edit the inputs.conf file programmatically via the Splunk API. Specifically, I want to know:
- Is there an API endpoint available to update the inputs.conf file?
- If yes, what would be the correct method to achieve this (e.g., required endpoint, parameters, or payload)?

I understand that inputs.conf primarily configures data inputs, and certain operations might have to be performed via the REST API or directly through configuration file updates. I would appreciate any documentation or examples regarding:
- Supported Splunk API endpoints for modifying input configurations.
- Best practices for editing inputs.conf programmatically.
- Any necessary permissions or prerequisites to perform such updates.
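As a sketch of the kind of call involved: Splunk's REST API exposes data inputs under services/data/inputs/, and POSTing to the monitor endpoint creates the corresponding inputs.conf stanza. The host, port, and auth scheme below are assumptions for a local instance; the request is only built here, not sent:

```python
import urllib.parse
import urllib.request

BASE = "https://localhost:8089"  # default management port; adjust for your instance

def build_create_monitor_request(path, index, token):
    """Build a POST to services/data/inputs/monitor that would add a
    [monitor://<path>] stanza to inputs.conf (request built, not sent)."""
    url = f"{BASE}/services/data/inputs/monitor"
    body = urllib.parse.urlencode({"name": path, "index": index}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Authorization": f"Bearer {token}"},  # or basic auth
        method="POST",
    )
```

Note that the executing user needs the relevant capabilities (e.g., edit_monitor) for the call to succeed.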
Then you just need a way to enable this log collection for this individual system. How depends on how you are managing your UF configurations. If you have a DS, just add a new server class for this collection and add this system to it. If you are using something else, do the equivalent there. Of course, you must have a separate UF app configured for this collection: just an inputs.conf file with a suitable configuration. Then, when the analysis is done, remove that UF app from the server via the DS or your other configuration management system. You should probably have a separate index for these logs with a short retention period, to get rid of the logs once they are no longer needed inside Splunk.
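As a sketch of the DS route, the server class and the collection app could look like this; the class name, app name, host, path, and index are all placeholders:

```
# serverclass.conf on the deployment server
[serverClass:temp_log_collection]
whitelist.0 = target-host.example.com

[serverClass:temp_log_collection:app:ta_temp_logs]
restartSplunkd = true

# inputs.conf inside the ta_temp_logs app
[monitor:///var/log/app/analysis.log]
index = temp_analysis
sourcetype = app_analysis
```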
Hi @rahulkumar , I worked on a project in which we were receiving logs concentrated and exported in Logstash format. The problem is that you cannot use the normal add-ons, because the format is different. You have two choices:
- modify all the parsing rules of the add-ons you use, or
- convert your Logstash logs back into the original format; it isn't a simple job, but a long one!

In a few words, you have to extract the metadata from the JSON using INGEST_EVAL and then restore the original log field into _raw. For more info, see https://conf.splunk.com/files/2020/slides/PLA1154C.pdf Ciao. Giuseppe
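A sketch of that second approach: the sourcetype and the JSON paths below are assumptions about a typical Logstash envelope (adapt them to the actual payload), and this only illustrates the shape of the props/transforms pairing described in the slides:

```
# props.conf
[logstash_wrapped]
TRANSFORMS-unwrap = extract_logstash_meta, restore_original_raw

# transforms.conf
[extract_logstash_meta]
INGEST_EVAL = host:=json_extract(_raw, "host.name")

[restore_original_raw]
INGEST_EVAL = _raw:=json_extract(_raw, "message")
```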