Hi @jbanAtSplunk, this isn't really a question for the Community but for a Splunk Architect. In any case, there are several other parameters needed to answer your question: is there an Indexer Cluster, and if so, what are the Search Factor and the Replication Factor? Is there a Search Head Cluster? Are there Premium Apps such as Enterprise Security or ITSI? How many concurrent users do you foresee on the system? Are there scheduled searches?

That said: if you don't have ES or ITSI, you could use around 3 Indexers. If you don't have a Search Head Cluster you can use one Search Head; if you have a Search Head Cluster you need at least three SHs and a Deployer. If you have an Indexer Cluster you need at least 3 Indexers and one Cluster Manager. If you have ES or ITSI, the resources are completely different!

For storage: if you don't have an Indexer Cluster you could consider:

Storage = License * retention * 0.5 = 500 * 30 * 0.5 = 7500 GB

If you have an Indexer Cluster, the required storage depends on the above factors.

About CPUs and RAM: they depend on the presence of Premium Apps, the number of concurrent users, and the number of scheduled searches, so I cannot help you without this information. The only hint is to look at the reference hardware at this url: https://docs.splunk.com/Documentation/Splunk/9.1.1/Capacity/Referencehardware

Ciao. Giuseppe
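The rule of thumb above can be sanity-checked with a quick run-anywhere calculation; the 500 GB license, 30-day retention, and 0.5 compression factor are the assumed values from the example, not fixed constants:

```spl
| makeresults
| eval license_gb=500, retention_days=30, compression_factor=0.5
| eval storage_gb=license_gb*retention_days*compression_factor
| table license_gb retention_days compression_factor storage_gb
```

Running this returns storage_gb=7500, matching the figure above; swap in your own license size and retention to re-estimate.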
Whenever I enable index clustering on what is to be my Splunk manager, I go to restart it and it never comes back up. Disabling index clustering through the CLI returns access to the GUI and allows Splunk to start like normal. Journalctl returns the following (after trying to start Splunk with "systemctl start splunk"):

> splunk[9423]: Waiting for web server at https://127.0.0.1:8000 to be available.........
> splunk[9423]: WARNING: web interface does not seem to be available!
> systemd[1]: splunk.service: control process exited, code=exited status=1
> systemd[1]: Failed to start Splunk Enterprise

Trying to start Splunk from the binary returns the following:

> Checking http port [8000]: not available
> ERROR: http port [8000] - no permision to use address/port combination.  Splunk needs to use this port.

I've reinstalled Splunk and rebuilt the VM that Splunk is sitting on, and neither of these has worked.
The recommended hardware does not change as ingestion changes.  Scale by adding instances rather than by adding resources.

The number of search heads is a function of the number of searches to run and the number of users to support.  The number of indexers is related to the rate of ingestion, but must also consider the number of searches to run (remember that indexers both store data and search it).

Storage needs are not just the retention period times the amount ingested each day.  Consider also replication of data among indexers, datamodel accelerations (which can consume a lot of space), and data compression.

There is an app that can help.  See https://splunkbase.splunk.com/app/5176 and engage your Splunk account team, as they are experts at this.
Hello all, I could use some help here with creating a search. Ultimately, I would like to know: if a user is added to a specific set of security groups, what security groups, if any, were removed from that same user?

Here is a search for security group removal:

index=wineventlog EventCode=4729 EventCodeDescription="A member was removed from a security-enabled global group" Subject_Account_Name=srv_HiveProvSentryNe OR Subject_Account_Name=srv_HiveProvSentry source="WinEventLog:Security" sourcetype=WinEventLog
| table member, Group_Name, Subject_Account_Name, _time

Here is a search for security group added:

index=wineventlog EventCode=4728 EventCodeDescription="A member was Added to a security-enabled global group" Subject_Account_Name=srv_HiveProvSentryNe OR Subject_Account_Name=srv_HiveProvSentry source="WinEventLog:Security" sourcetype=WinEventLog
| table member, Group_Name, Subject_Account_Name, _time

Additional search info:
EventCode=4728 - Added
EventCode=4729 - Removed
Group_Name - security group
Subject_Account_Name - prov sentry
member - user

The security groups I would like to monitor users being added to:
RDSUSers_GRSQCP01
RDSUSers_GROQCP01
RDSUSers_BRSQCP01
RDSUSers_BROQCP01
RDSUSers_VRSQCP01
RDSUSers_VROQCP01

Again, I am looking to monitor: if a user was added to any of the above 6 security groups, were they, within a few hours before or after the event, removed from any other groups? Let me know if I can provide any additional info, and as always, thank you for the help.
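One way to correlate the two event codes per user is a single search over both codes with stats by member. This is only a minimal sketch, assuming the field names shown in the searches above; it groups added/removed groups over the whole search time range (so run it over a few-hour window) rather than enforcing a strict per-event window:

```spl
index=wineventlog source="WinEventLog:Security" sourcetype=WinEventLog (EventCode=4728 OR EventCode=4729)
| eval action=if(EventCode=4728,"added","removed")
| eval watched_add=if(action="added" AND match(Group_Name,"^RDSUSers_(GRSQCP01|GROQCP01|BRSQCP01|BROQCP01|VRSQCP01|VROQCP01)$"),1,0)
| stats values(eval(if(action="added",Group_Name,null()))) AS groups_added values(eval(if(action="removed",Group_Name,null()))) AS groups_removed max(watched_add) AS added_to_watched by member
| where added_to_watched=1 AND isnotnull(groups_removed)
| table member groups_added groups_removed
```

This keeps only users who were added to one of the six watched groups and also had at least one removal event in the same window.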
@phanTom we are running version 6.0.0.114895, so basically we fit the scope of the known issue you are referring to. It is good to know that this page exists; I had no idea so far. Thank you! It seems that upgrading to the latest release, 6.1.1, would do the trick and get rid of this 30d rotation, don't you think?
@schimpanze what version are you on? IIRC there was a bug where automation tokens got auto-rotated every 30 days, so you may have fallen victim to this. It will be on the Known Issues page of the release version you have, if you want to check.
Hi, if we upgrade our license to 500GB, what is the best-practice hardware architecture (CPU + RAM), and how many ("N") Search Heads and ("N") indexers? How much storage per indexer do we need if, let's say, retention is 30 days with "N" installed indexers? Or, at least, can you share where there is a good .pdf for me to read with those answers? Thank you.
Excluding 2200-0700 is the same as including 0700-2159, which is what the cron schedule I offered does.
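For reference, a cron expression that fires every 3 hours only within the 0700-2159 window could look like the following (this assumes a top-of-the-hour schedule; the exact minute field from the earlier reply may differ):

```
0 7-21/3 * * *
```

This fires at 07:00, 10:00, 13:00, 16:00, and 19:00, so nothing runs between 22:00 and 07:00.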
No, that doesn't work.  I believe the reason it doesn't work is that it is just attempting to change the value of the expression (end - start) to a string, and that value appears to be empty for some reason. I appreciate the try, though.
Adding to my problem: if I add the table command, then it gives an error in the rename command, and if I remove the rename command, then it throws an error for the spath command.
Hi @Ryan.Paredez, yes, it seems the collector log was too long. If someone wants to have a look, here is the link to my logfile: https://github.com/open-telemetry/opentelemetry-collector/files/12803473/collector-log.txt
My data is coming from O365 as JSON. I am using spath to get the required fields; after that, I want to compare the data with a static list containing roles to be monitored, but unfortunately I am getting the below error:

Error in 'table' command: Invalid argument: 'role="Authentication Administrator"'

It's not working. PFA the relevant snap.
There's an old answer to a similar question that might help.  https://community.splunk.com/t5/Security/Does-Splunk-support-Microsoft-Azure-AD-B2C/m-p/377039
Hi @richgalloway  My ask was: Cron schedule: every 3 hours, excluding 10pm to 7am.  We don't want to receive alerts from 10 pm to 7 am.
The lack of .spec files for app.conf should be unrelated to the problem you are having.  It means btool can't check the syntax of app.conf, but Splunk can still process the contents of that file.  Also, the blacklist is in inputs.conf, so that's another reason why this is an unrelated issue.
See if this run-anywhere example query helps.

| makeresults
| eval data="1997-10-10 15:35:13.046, CREATE_DATE=\"1997-10-10 13:36:22.742479\", LAST_UPDATE_DATE=\"1997-10-10 13:36:22.74\", ACTION=\"externalFactor\", STATUS=\"info\", DATA_STRING=\"<?xml version=\"1.0\" encoding=\"UTF-8\"?><externalFactor><current>parker</current><keywordp><encrypted>true</encrypted><keywordp>******</keywordp></keywordp><boriskhan>boriskhan1-CMX_PRTY</boriskhan></externalFactor>\" 1997-10-10 15:35:13.046, CREATE_DATE=\"1997-10-10 13:03:58.388887\", LAST_UPDATE_DATE=\"1997-10-10 13:03:58.388\", ACTION=\"externalFactor.RESPONSE\", STATUS=\"info\", DATA_STRING=\"<?xml version=\"1.0\" encoding=\"UTF-8\"?><externalFactorReturn><roleName>ROLE.CustomerManager</roleName><roleName>ROLE.DataSteward</roleName><pepres>false</pepres><externalFactor>false</externalFactor><parkeristrator>true</parkeristrator><current>parker</current></externalFactorReturn>"
| eval data=split(data," ")
| mvexpand data
| eval _raw=data
| fields - data
``` Everything above sets up demo data. Delete IRL ```
``` Extract keys and values ```
| rex max_match=0 "<(?<key>[^>]+)>(?<value>[^<]+)<\/\1>"
``` Match keys and values so they stay paired during mvexpand ```
| eval pairs=mvzip(key,value)
| mvexpand pairs
``` Separate key from value ```
| eval pairs=split(pairs,",")
``` Define key=value result ```
| eval pairs=mvindex(pairs,0) . "=" . mvindex(pairs,1)
| table pairs
Hi there!

I have a dropdown "office" in dashboard 1 as a multiselect (full office, half office); based on the selection, it should display the results in dashboard 1.

In dashboard 1, I have a pie chart. If I click the pie chart, it needs to take me to dashboard 2, which contains the same dropdown "office" as a multiselect (full office, half office, non-compliant office).

If I click the pie chart in dashboard 1 while the office value is "full office, half office", it should show the same selection in dashboard 2, and the panels in dashboard 2 should use that value.

I have already configured the drilldown link. The problem is that with a prefix value of ", a postfix of ", and a delimiter, it passes that same literal string to the dashboard 2 dropdown, so I don't get any results in the dashboard 2 panels.

I need a solution for this. Thanks, Manoj Kumar S
Hello, I would like to calculate a weighted average of an average call time. The logs I have available are of this type (screenshot not shown), and I want to be able to obtain the calculation of the average time this way (screenshot not shown). The formula applied is as follows:

temps_moyen = ((nb_appel_1 * temps_moyen_1) + (nb_appel_2 * temps_moyen_2) + ...) / sum of nb_appel

Here is what I have done so far:

index=rcd statut=OK partenaire=000000000P
| eval date_appel=strftime(_time,"%b %y")
| dedup nom_ws date_appel partenaire temps_rep_max temps_rep_min temps_rep_moyen nb_appel statut tranche_heure heure_appel_max
| eval nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, null())
| eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO",nb_appel, null())
| eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, null())
| eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO",temps_rep_min, null())
| eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, null())
| eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO",temps_rep_max, null())
| eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, null())
| eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO",temps_rep_moyen, null())
| stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO min(temps_rep_min_OK) as temps_rep_min_OK, min(temps_rep_min_KO) as temps_rep_min_KO max(temps_rep_max_OK) as temps_rep_max_OK, max(temps_rep_max_KO) as temps_rep_max_KO, values(temps_rep_moyen_OK) AS temps_rep_moyen_OK, values(temps_rep_moyen_KO) as temps_rep_moyen_KO values(nom_ws) as nom_ws, values(date_appel) as date_appel
| eval temps_rep_moyen_KO_calcul=sum(temps_rep_moyen_KO*nb_appel_KO)/(nb_appel_KO)
| eval temps_rep_moyen_OK_calcul=sum(temps_rep_moyen_OK*nb_appel_OK)/(nb_appel_OK)
| fields - tranche_heure_bis , tranche_heure_partenaire
| sort 0 tranche_heure
| table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO

I cannot get the final average OK time displayed: temps_moyen = ((nb_appel_1 * temps_moyen_1) + (nb_appel_2 * temps_moyen_2) + ...) / sum of nb_appel. I really need help, please. Thank you so much.
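One likely reason the *_calcul fields come out empty is that eval sum(...) is not an aggregation; the weighted sum has to be built before stats and divided after. A minimal sketch of the formula above, assuming the field names from the search (nb_appel, temps_rep_moyen) and ignoring the OK/KO split for brevity:

```spl
index=rcd statut=OK partenaire=000000000P
| eval date_appel=strftime(_time,"%b %y")
| eval produit=nb_appel*temps_rep_moyen
| stats sum(produit) AS somme_ponderee sum(nb_appel) AS nb_appel_total by nom_ws date_appel
| eval temps_rep_moyen_calcul=round(somme_ponderee/nb_appel_total,3)
```

This computes sum(nb_appel * temps_moyen) / sum(nb_appel) per web service and month, which is exactly the weighted-average formula stated above.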
Hi, can you show your props.conf for that part?  Where have you defined the extractions for the field which still has that data?

Is there a possibility that you first defined the additional field, and only after that applied SEDCMD to the raw data? That is a common mistake (of course, you can't use SEDCMD at search time): forgetting to mask/change both the raw data and the extracted field.

r. Ismo
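As an illustration of the index-time side, a SEDCMD mask lives in props.conf on the indexer or heavy forwarder and rewrites _raw before indexing; the sourcetype name and pattern below are hypothetical, shown only to illustrate placement:

```
# props.conf -- hypothetical sourcetype and pattern
[my_sourcetype]
SEDCMD-mask_secret = s/password=\S+/password=xxxxx/g
```

Because SEDCMD changes _raw at index time, it only affects newly indexed data, and any field extracted from the original value elsewhere must be masked separately.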
No problems with permissions, disk usage, etc. I think it's a global problem. I know that some days ago I tried to set up a pkcs12 certificate (estreamer) on the Splunk server, but I can't remember where I made those settings.

Output from commands:

$ source /home/splunk/bin/setSplunkEnv && df -H $SPLUNK_HOME $splunk_db
Tab-completion of "splunk <verb> <object>" is available.
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-home 886G 587G 300G 67% /home

$ sudo /home/splunk/bin/splunk btool indexes list volume | egrep '(\[|path)'
[volume:_splunk_summaries]
path = $SPLUNK_DB

$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/centos-root 52403200 11477568 40925632 22% /
devtmpfs 16312676 0 16312676 0% /dev
tmpfs 16329816 0 16329816 0% /dev/shm
tmpfs 16329816 10560 16319256 1% /run
tmpfs 16329816 0 16329816 0% /sys/fs/cgroup
/dev/sda3 1038336 173348 864988 17% /boot
/dev/mapper/centos-home 865131800 558906488 306225312 65% /home
tmpfs 3265964 12 3265952 1% /run/user/42
tmpfs 3265964 0 3265964 0% /run/user/1001