
I mean, if I change a server class in the Deployment Server from one to another, everything else stays the same.
Here is the Splunk Validated Architectures topology guidance: https://docs.splunk.com/Documentation/SVA/current/Architectures/TopologyGuidance
Hi @danielbb, as @isoutamo and @kiran_panchavat also said, 8089 is the management port and cannot be used via the GUI. In addition, connections to 8089 all use https, not http. Ciao. Giuseppe
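For reference, the management port binding lives in web.conf; a minimal sketch (the host/port values are illustrative, and the curl line is just one way to confirm the REST API answers over https):

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# Management (REST API) port -- always served over HTTPS, never by Splunk Web
mgmtHostPort = 127.0.0.1:8089

# Quick check against the REST API (credentials are placeholders):
# curl -k -u admin:changeme https://localhost:8089/services/server/info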
Hi @michael_vi, sorry but your question isn't so clear: what do you mean by "app class"? Are you speaking of an add-on for input data, or something else? Splunk doesn't reindex the same data twice even if you change the data filename. The only way to reindex already-indexed data is if you used crcSalt = <SOURCE> in your inputs.conf stanzas and you changed the data filename. Final note: all changes to a conf file (not made via the GUI) require a Splunk restart on that machine. Ciao. Giuseppe
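To illustrate the crcSalt point, a sketch of an inputs.conf monitor stanza (path, index, and sourcetype are made-up examples):

# inputs.conf -- illustrative monitor input
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp
# With crcSalt = <SOURCE>, the file's full path is mixed into the CRC
# that Splunk uses to recognize already-seen files, so renaming or
# moving the file makes Splunk treat it as new and reindex it.
crcSalt = <SOURCE>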
Wait. What do you mean by "expand to a cluster"? And what are you trying to achieve? I understand that initially you have an all-in-one installation. What architecture are you aiming at? "Cluster" (unless explicitly referred to as a SH cluster) typically means a cluster of indexers with a Cluster Manager. For that you need at least a single separate SH. So for a clustered installation you need at least three nodes: one SH, one CM, and at least one indexer. The first thing to do, if you indeed have an AIO setup, would be to add an external SH and turn your existing server into a pure indexer. After you have done that, you might think of converting the indexer to a cluster node.
Hi @desmando, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/Indexer/Migratenon-clusteredindexerstoaclusteredenvironment In a few words: install the same Splunk version on the new indexers and the Cluster Manager, configure the CM as the Cluster Manager node, configure the IDXs as peer nodes, modify the IDX configurations for a cluster, deploy the configurations of the old IDX to both peers using the CM, and configure the SH to access the cluster. In the CM, you should see both IDXs and all the indexes replicated. Remember that only new data is replicated between the IDXs; old data isn't. To replicate old data as well, you need Splunk Professional Services or a Certified Core Consultant. Ciao. Giuseppe
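A minimal sketch of the clustering stanzas involved (values like the key, URI, and factors are placeholders; these are the 9.x setting names, older versions used master/master_uri -- check the docs for your version):

# server.conf on the Cluster Manager
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = <your-cluster-key>

# server.conf on each peer (indexer)
[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <your-cluster-key>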
Hi all, a general question that I couldn't find an answer to... If I change a certain app class from one to another and restart splunkd, will there be any effect on indexing? I mean, will it re-index the same data, or a portion of it, twice? Or, since it's the same app and same source, maybe there is no need to restart splunkd? Thanks
Hi @arusoft, the best approach would be to move all these knowledge objects into one or more custom apps, package them, and upload them to your Splunk Cloud instance. The only issue you could run into is if you used something special such as scripts, custom commands, etc., because they aren't accepted in Splunk Cloud. In addition, you have to manually add version="1.1" to the first row of all dashboards. For this reason, the process could be: group all your knowledge objects into one or more custom apps, install them on a standalone 9.x Splunk instance, make all the changes to the dashboards, use the Upgrade Readiness App (https://splunkbase.splunk.com/app/5483) on this system to highlight any anomalies, package your custom apps, upload them one by one to Splunk Cloud, and check the upgrade reports to identify any additional errors to solve. Ciao. Giuseppe
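For the version="1.1" change, it goes on the dashboard's root element in the Simple XML source; a sketch (label and contents are placeholders):

<dashboard version="1.1">
  <label>My Dashboard</label>
  <!-- rows, panels, and searches as before -->
</dashboard>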
This "glues" the extracted timestamp to the remainder of the event. Add a space between $1 and $0:
FORMAT = _ts=$1 $0
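Applied to the stanza from the question, the fix is just that extra space in FORMAT (a sketch, keeping the original stanza name and keys):

# transforms.conf
[metadata_time]
SOURCE_KEY = _time
REGEX = (.*)
# The space between $1 and $0 keeps _ts separated from whatever
# field follows it in _raw, even when no subsecond value is appended.
FORMAT = _ts=$1 $0
DEST_KEY = _raw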
As mentioned, ITEW is the same package as ITSI, but it feels like you can only use 10% of it for free without buying a license.
Hi @manhdt, as I said, ITSI is a premium app, which means you need an additional license for it; the normal Splunk Enterprise license isn't sufficient. The solutions for ITSI are the ones I described in my previous post. If you acquired an ITSI license for yourself, you should have received the link to download it; if you acquired it for an end customer, the email went to them. Ciao. Giuseppe
Hi all, I'm in the process of migrating our single-host Splunk installation to a new server. After setting up a new Splunk instance and feeding it data from a few devices, I noticed an oddity I never saw before. Logging in and getting to Search & Reporting all works at the expected speed, but every time I start a new search, 18 to 19 seconds are spent on a POST call to the URL (host and user obfuscated) https://hostname/en-US/splunkd/__raw/servicesNS/myusername/search/search/ast The result is always a 200, but it always takes those 18 to 19 seconds to finish. Once I have the results, everything is fast: selections in the timeline, paging through results, and changing the "results per page" value. It seems like the system is trying something, runs into a timeout, and then proceeds with normal work, but I cannot figure out what that would be. I haven't done much customization yet, but we are in a heavily firewalled environment. Am I overlooking something here?
Hi @skramp, I am currently using a Splunk Enterprise license, but at the moment I do not have the email information that was used to purchase the license.
UPDATE: Here are the setPushEventSettings public IPs that need to be whitelisted: setPushEventSettings

Sup friends, so I just came across this Bitdefender issue and here's what worked for me:

1. Ensure your HEC endpoint supports TLS 1.2 (it most certainly does):

openssl s_client -connect http-inputs-namehere.splunkcloud.com:443 -tls1_2

2. Ensure your Splunk Cloud IP allow list for HEC ingestion (Splunk Cloud > Settings > Server settings > IP allow list) includes the IP ranges that Bitdefender Cloud API responses come from. I'm still not sure what they are, but you can probably get them from Bitdefender Support.

3. Ensure the integration command is properly formatted. If your stack is on GCP, the HEC URL will be different; I believe it would be http-inputs.namehere.splunkcloud.com. My example below is for stacks hosted in AWS (more info on that here: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector):

curl -k -X POST https://cloud.gravityzone.bitdefender.com/api/v1.0/jsonrpc/push -H 'authorization: Basic <Auth header base64>' -H 'cache-control: no-cache' -H 'content-type: application/json' -d '{"params": {"status": 1, "serviceType": "splunk", "serviceSettings": {"url": "https://http-inputs-namehere.splunkcloud.com:443/services/collector", "requireValidSslCertificate": false, "splunkAuthorization": "Splunk <Splunk Cloud HEC Token>"}, "subscribeToEventTypes": {"hwid-change": true,"modules": true,"sva": true,"registration": true,"supa-update-status": true,"av": true,"aph": true,"fw": true,"avc": true,"uc": true,"dp": true,"device-control": true,"sva-load": true,"task-status": true,"exchange-malware": true,"network-sandboxing": true,"malware-outbreak": true,"adcloud": true,"exchange-user-credentials": true,"exchange-organization-info": true,"hd": true,"antiexploit": true}}, "jsonrpc": "2.0", "method":"setPushEventSettings", "id": "1"}'

Without the Bitdefender IPs, I had to test by opening up the HEC allow list with 0.0.0.0/0 (it takes a couple of minutes for the change to take effect), getting a successful response, and then immediately removing it, but this will let you know if this is the issue. Or you could wait to get the IPs from Bitdefender. If you do get a successful response, you can send a test event with this:

curl -k -X POST https://cloud.gravityzone.bitdefender.com/api/v1.0/jsonrpc/push -H 'authorization: Basic <Auth header base64>' -H 'cache-control: no-cache' -H 'content-type: application/json' -d '{"params": {"eventType": "av"}, "jsonrpc": "2.0", "method": "sendTestPushEvent", "id": "3"}'

Hope this helps!
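If you need to build the <Auth header base64> value for the curl calls above, it is a base64 encoding of the credential string. The snippet below assumes Bitdefender's usual convention of the API key as the username with an empty password (key followed by a colon) -- verify against their docs; the key shown is a placeholder:

```shell
# Build the value for: -H 'authorization: Basic <...>'
# API_KEY is a placeholder, not a real credential.
API_KEY="my-api-key"
AUTH=$(printf '%s:' "$API_KEY" | base64)
echo "authorization: Basic $AUTH"
```

Note that printf (rather than echo) avoids smuggling a trailing newline into the encoded value.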
What is the fastest way to migrate Splunk objects (dashboards, alerts, reports) from one of these old versions (6.5, 7) to the latest Cloud? Thanks
Thanks for your help with this. In the meantime I've run into another problem. Could you please help me? This is the topic: https://community.splunk.com/t5/Getting-Data-In/conditional-whitespace-in-transform/m-p/708831
Hello everyone! I am experimenting with the SC4S transforms that are posted here: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/ My goal is to send the logs to a syslog-ng instance running with a custom config. My current problem is that the SC4S config contains a part where it checks for subseconds and appends the value to the timestamp, if found.

[metadata_source]
SOURCE_KEY = MetaData:Source
REGEX = ^source::(.*)$
FORMAT = _s=$1 $0
DEST_KEY = _raw

[metadata_sourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(.*)$
FORMAT = _st=$1 $0
DEST_KEY = _raw

[metadata_index]
SOURCE_KEY = _MetaData:Index
REGEX = (.*)
FORMAT = _idx=$1 $0
DEST_KEY = _raw

[metadata_host]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(.*)$
FORMAT = _h=$1 $0
DEST_KEY = _raw

[metadata_time]
SOURCE_KEY = _time
REGEX = (.*)
FORMAT = _ts=$1$0
DEST_KEY = _raw

[metadata_subsecond]
SOURCE_KEY = _meta
REGEX = \_subsecond\:\:(\.\d+)
FORMAT = $1 $0
DEST_KEY = _raw

In my case, however, when no subsecond value is found, the timestamp field does not get a whitespace appended, and so it is effectively concatenated with the following field, which is not what I want. How could I set up the config so that there is always a whitespace before the next field (the host/_h field)? I tried adding an extra whitespace in front of the _h in the FORMAT part of the metadata_host stanza, but that seems to be ignored. This is what I see:

05:58:07.270973 lo In ifindex 1 00:00:00:00:00:00 ethertype IPv4 (0x0800), length 16712: (tos 0x0, ttl 64, id 49071, offset 0, flags [DF], proto TCP (6), length 16692) 127.0.0.1.49916 > 127.0.0.1.cslistener: Flags [.], cksum 0x3f29 (incorrect -> 0x5743), seq 1:16641, ack 1, win 260, options [nop,nop,TS val 804630966 ecr 804630966], length 16640
0x0000: 0800 0000 0000 0001 0304 0006 0000 0000 ................
0x0010: 0000 0000 4500 4134 bfaf 4000 4006 3c12 ....E.A4..@.@.<.
0x0020: 7f00 0001 7f00 0001 c2fc 2328 021a 7392 ..........#(..s.
0x0030: 486d 209f 8010 0104 3f29 0000 0101 080a Hm......?)......
0x0040: 2ff5 b1b6 2ff5 b1b6 5f74 733d 3137 3336 /.../..._ts=1736
0x0050: 3931 3730 3739 5f68 3d73 706c 756e 6b2d 917079_h=splunk-
0x0060: 6866 205f 6964 783d 5f6d 6574 7269 6373 hf._idx=_metrics
0x0070: 205f 7374 3d73 706c 756e 6b5f 696e 7472 ._st=splunk_intr

This is the interesting part:

0x0040: 2ff5 b1b6 2ff5 b1b6 5f74 733d 3137 3336 /.../..._ts=1736
0x0050: 3931 3730 3739 5f68 3d73 706c 756e 6b2d 917079_h=splunk-
0x0060: 6866 205f 6964 783d 5f6d 6574 7269 6373 hf._idx=_metrics

The _h field comes right after the end of the _ts field, without any clear separation.
I set up DoDBanner when logging in by putting the following in web.conf. It was displayed in Splunk 9.2.1, but it was no longer displayed in Splunk 9.2.2.

$SPLUNK_HOME$\etc\system\local\web.conf
[settings]
login_content = <script>function DoDBanner() {alert("Hello World");}DoDBanner();</script>

Is DoDBanner no longer supported from a certain version?
This is actually an MSI quirk. I've seen it happen with various software packages over the last 20+ years. And yes, it's frustrating.
Hi @gcusello, I am currently using a Splunk Enterprise license, but at the moment I do not have the email information that was used to purchase the license.