All Posts

There are two possible approaches: 1. Migrate the existing installation to a new OS and then "upgrade" it to a cluster. 2. Build the cluster first, adding new components on the new OS. Both have their pros and cons. Theoretically, Splunk advises that all components run "the same OS and version", whatever that means. Practically, of course, it's impossible to keep to this requirement throughout the whole system lifecycle, if for no other reason than that mid-upgrade some parts of your cluster are on an earlier version and some on a later one. Also, since Splunk doesn't really rely on that many components of the OS itself, it shouldn't matter much as long as you're within the supported OS range (but yes, it can cause issues with support involvement should you need their help). OTOH, if you try to fiddle with your only existing installation (as in "migrate first, clusterize later"), you of course take on additional risk from manipulating your only instance. If you feel confident in your backups, that might be better from the supportability point of view, but it probably involves longer downtimes.
Here is an old post discussing this issue: https://community.splunk.com/t5/Splunk-Search/How-to-find-computers-which-stopped-sending-logs/m-p/694544. It contains one example and several links to other resources and apps for handling this.
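As a minimal starting point along the lines discussed in that thread, a sketch like the following finds hosts that have been seen in the last 30 days but have gone quiet (the index filter and the 24-hour threshold are placeholders to adjust):

| tstats latest(_time) as lastSeen where index=* earliest=-30d by host
| eval hoursSinceLast = round((now() - lastSeen) / 3600, 1)
| where hoursSinceLast > 24
| sort - hoursSinceLast

Because tstats reads index metadata rather than raw events, this stays fast even over long lookback windows.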
Good morning ITWhisperer, Many thanks for the suggestion. This might actually work. I could first evaluate the number of values and then use mvindex. Will try that. Again, many thanks. Kind Regards, Mike.
Field names are case-sensitive - try using productId rather than productid.
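As a quick illustration (the value is a placeholder), if the extracted field is actually productId, then

| where productId = "12345"

matches as expected, while the same search written with productid would silently return nothing.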
Hey guys, I have an input monitoring a log from syslog. The file contains data of multiple severities, which is bad, but I was thinking I could use a transform to set the sourcetype in props, which I could then use to format the data. So I did this in inputs.conf:

[udp://x.x.x.x:5514]
index=cisco_asa
sourcetype=cisco_firewall
disabled=false

and these are the logs from the Cisco ASA:

Sep 20 15:36:41 10.10.108.122 %ASA-4-106023: Deny tcp src inside:x.x.x.x/xxxx dst outside:x.x.x.x/xxxx by access-group "Inside_access_in" [0x51fd3ce2, 0x0]
Sep 20 15:36:37 10.10.108.122 %ASA-5-746015: user-identity: [FQDN] go.microsoft.com resolved x.x.x.x
Sep 20 15:36:37 10.10.108.122 %ASA-6-302021: Teardown ICMP connection for faddr x.x.x.x/x gaddr x.x.x.x/x laddr x.x.x.x/x type 8 code 0

Then I created a transforms.conf:

[set_log_type_critical]
source_key = _raw
regex = .*%ASA-4
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:alert

[set_log_type_error]
source_key = _raw
regex = .*%ASA-5
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:critical

[set_log_type_warnig]
source_key = _raw
regex = .*%ASA-6
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:error

I also have a props.conf that looks like this:

[cisco:firewall]
TRANSFORMS-setlogtype_alert=set_log_tyoe_critical
TRANSFORMS-setlogtype_critical=set_log_tyoe_error
TRANSFORMS-setlogtype_error=set_log_tyoe_warning

My question is this: after configuring all of that, the sourcetype separation still isn't happening. Do the transforms and props look correct? I'm testing locally, so I can break things all day long. Thanks for the assistance.
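For comparison, here is a rough, untested sketch of how these files could line up, assuming the intent is to map ASA-4 to alert, ASA-5 to critical, and ASA-6 to error. Three things to note: the documented transforms attribute names are uppercase (SOURCE_KEY, REGEX, DEST_KEY, FORMAT); the stanza names referenced in props must match transforms exactly (the post above has set_log_tyoe in props vs set_log_type in transforms); and the props stanza must match the sourcetype the input assigns (cisco_firewall), not [cisco:firewall].

transforms.conf:

[set_log_type_alert]
SOURCE_KEY = _raw
REGEX = %ASA-4-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:alert

[set_log_type_critical]
SOURCE_KEY = _raw
REGEX = %ASA-5-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:critical

[set_log_type_error]
SOURCE_KEY = _raw
REGEX = %ASA-6-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:error

props.conf:

[cisco_firewall]
TRANSFORMS-set_sourcetype = set_log_type_alert, set_log_type_critical, set_log_type_error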
mvcount will give you the number of values in a multivalue field - does that help?
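For instance, a minimal sketch (yourfield here is a placeholder for the actual multivalue field):

| eval n = mvcount(yourfield)
| eval last = mvindex(yourfield, n - 1)

mvindex is zero-based, so n - 1 is the final value; mvindex(yourfield, -1) is a handy shorthand for the same thing.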
Yes, I also agree with @PickleRick, but sadly I need to cook with what I currently have... We have an on-prem standalone. The OS must be replaced (CentOS 7), and even the hardware warranty will expire at the end of this year. We have our virtual environment and S3 as well. (I have system architect colleagues, but they are not "Splunk-related" ones.) I have similar plans to what you describe, with only one major difference: I plan to set up 2-3 heavy forwarders and migrate the current inputs to them. I can do this one by one and fast, without a huge outage. I will set up the new deployment in parallel, and when everything looks okay, I will redirect the HFs to the new deployment. Only the cold buckets are "problematic" now. But we can still keep the old environment without new input and search historical data there if needed; once it expires, we stop the standalone... Thank you for the insights!
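For what it's worth, the cutover step for each heavy forwarder is typically just an outputs.conf change pointing it at the new tier (hostnames below are placeholders):

[tcpout]
defaultGroup = new_indexers

[tcpout:new_indexers]
server = new-idx1.example.com:9997, new-idx2.example.com:9997

Keeping the old output group defined but unreferenced makes it easy to roll back if the new deployment misbehaves.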
Thank you!
Do you have any code example based on your explanation? That would really help me.
Hi @Mallika1217 , a LinkedIn profile? I'm not able to understand your question - please add some more details. Do you want to get notified about a particular Splunk app/add-on?
Hi @mayurr98, Is it ingesting exactly 10,000 events per hour, or is that figure rounded? The query is set to run four times per hour. Does it return 2,500 events per run, or does the first run return 10,000 and the rest 0? Are there any error messages in the internal logs?

index=_internal sourcetype=dbx_*

Also, how much more than 10,000 events are we talking about here? There could be a few different causes depending on the answers to the above questions. First, check that there are no other stray db_inputs.conf files overriding your settings above; perhaps use btool for this. Next, as DB Connect v3 and above uses HEC under the hood, use btool on limits.conf to see if the max_content_length setting is too restrictive. Depending on what you find in the logs, you might also consider reducing the max_rows and/or fetch_size settings and/or decreasing the interval. Counterintuitive as it sounds, this might actually improve the number of events returned if some performance bottleneck is preventing the query from completing. Regards, K
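For reference, those btool checks might look like this (paths assume a default Linux install under /opt/splunk):

/opt/splunk/bin/splunk btool db_inputs list --debug
/opt/splunk/bin/splunk btool limits list --debug | grep -i max_content_length

The --debug flag shows which file each setting comes from, which makes stray overrides easy to spot.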
Hi @JWGOV , As per the documentation here, for Oracle database inputs you may use any of varchar, int, float, real, bigint, number, timestamp, datetime, or date for the rising column. You shouldn't really have to do any other formatting of the data as long as the rising column is one of those types, but... Without seeing the actual data causing the ORA-01843 errors it's hard to say where the root cause lies. Consider using TO_NUMBER to convert the EVENT_TIMESTAMP field from a timestamp to a number, which would at least avoid this specific error. You might still have problems, though, if some bad data lands in the EVENT_TIMESTAMP field and can't be converted to a number, or if it is converted to a number from a different timestamp format, which might result in a checksum error. Are you able to show an example from when one of those errors occurred?
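As a rough sketch only (the schema/table names are placeholders, and the exact format mask depends on your data), the TO_NUMBER idea could look like this in the rising-column query, with ? as the DB Connect checkpoint placeholder:

SELECT t.*,
       TO_NUMBER(TO_CHAR(t.EVENT_TIMESTAMP, 'YYYYMMDDHH24MISSFF3')) AS RISING_COL
FROM MY_SCHEMA.MY_TABLE t
WHERE TO_NUMBER(TO_CHAR(t.EVENT_TIMESTAMP, 'YYYYMMDDHH24MISSFF3')) > ?
ORDER BY RISING_COL ASC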
Is there any LinkedIn profile to get all Splunk updates and related cyber threats?
Hi @isoutamo, thanks for your reply. Unfortunately the TA is not supported by Splunk; Splunk support told me as much when I raised a case with them and suggested I try the forum. I did find this post, which seems to be a very similar issue, so we are going to try the solution there of using the Splunk Add-on for Microsoft Cloud Services (which IS Splunk supported!) to ingest the events with HEC via Event Hub. I will reply again with any updates.
Hi @hazem , I'm not an expert in Load Balancers, and, as @isoutamo said, it depends on the Load Balancer: ask this question to a specialist in your LB. Ciao. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
I had this same issue on a new install of Splunk: clients that still hadn't had the universal forwarder removed were connecting to this new instance. After removing the UF from those machines, I tried to delete them from the client list and kept receiving this message; it would not go away. I needed to re-enable the deployment server on the new instance in order to delete the clients, by running the following commands. Worked for me, hope it helps.

sudo /opt/splunk/bin/splunk enable deploy-server
sudo /opt/splunk/bin/splunk restart
Unfortunately no. The Dashboard Studio PDF export function will put the entire dashboard on a single page. The older XML dashboards could at least split their exported PDFs into pages. You may find some success in exporting the Dashboard Studio PDF as a single page, then using "print to PDF" with the "Page Sizing" set to "Poster". This should split your single-page PDF into a multi-page PDF, but it will likely take a lot of trial and error to get the formatting right. Another workaround is to have multiple dashboards (Part 1, Part 2, Part 3, etc.) and export a PDF for each of them, then combine them by printing them all to a single PDF.
Are you able to re-create the dashboard by copying its source code into a new dashboard, both within and outside the ITSI app? This would at least show that there isn't an outside item creating that gap. If so, then there must be something in the dashboard source code that you can adjust to change the gap height. I don't have ITSI on my test machine, so I could not test it myself.
There are some troubleshooting steps you could try:
1. Use a different browser
2. Try to edit other macros
3. Try to add a new macro
4. Try to edit other knowledge objects, like field extractions, dashboards, etc.
5. Make a new user with very high permissions (e.g. admin) and try editing the macro with it
6. Install a new search head, connect it to your indexers, then edit the macro
It is hard to say. If you must know what happened, you could try reinstalling Splunk on the drive after formatting it back to the state it was in before the install, and then see if the problem recurs.