@gcusello there are no heavy forwarders. We have a syslog server with a UF installed on it that forwards the data to our deployment server. I wrote the props and transforms on the DS and pushed them to the CM and on to the indexers. Where am I going wrong?
@Karthikeya Additionally, I have tested this in my lab, and it's working fine. Please take a look. Sample events:
rsaid="/alpha-A-01/ALPHA-CK-GSPE/v-alpha.linux.com-101"
rsaid="/beta-B-02/BETA-CK-GSPE/v-beta.linux.com-102"
rsaid="/gamma-G-03/GAMMA-CK-GSPE/v-gamma.linux.com-103"
rsaid="/delta-D-04/DELTA-CK-GSPE/v-delta.linux.com-104"
rsaid="/epsilon-E-05/EPSILON-CK-GSPE/v-epsilon.linux.com-105"
rsaid="/zeta-Z-06/ZETA-CK-GSPE/v-zeta.linux.com-106"
rsaid="/eta-H-07/ETA-CK-GSPE/v-eta.linux.com-107"
rsaid="/theta-T-08/THETA-CK-GSPE/v-theta.linux.com-108"
rsaid="/iota-I-09/IOTA-CK-GSPE/v-iota.linux.com-109"
rsaid="/kappa-K-10/KAPPA-CK-GSPE/v-kappa.linux.com-110"
rsaid="/lambda-L-11/LAMBDA-CK-GSPE/v-lambda.linux.com-111"
rsaid="/mu-M-12/MU-CK-GSPE/v-mu.linux.com-112"
rsaid="/nu-N-13/NU-CK-GSPE/v-nu.linux.com-113"
rsaid="/xi-X-14/XI-CK-GSPE/v-xi.linux.com-114"
rsaid="/omicron-O-15/OMICRON-CK-GSPE/v-omicron.linux.com-115"
rsaid="/pi-P-16/PI-CK-GSPE/v-pi.linux.com-116"
rsaid="/rho-R-17/RHO-CK-GSPE/v-rho.linux.com-117"
rsaid="/sigma-S-18/SIGMA-CK-GSPE/v-sigma.linux.com-118"
rsaid="/tau-T-19/TAU-CK-GSPE/v-tau.linux.com-119"
rsaid="/upsilon-U-20/UPSILON-CK-GSPE/v-upsilon.linux.com-120"
rsaid="/phi-F-21/PHI-CK-GSPE/v-phi.linux.com-121"
rsaid="/chi-C-22/CHI-CK-GSPE/v-chi.linux.com-122"
rsaid="/psi-PS-23/PSI-CK-GSPE/v-psi.linux.com-123"
rsaid="/omega-W-24/OMEGA-CK-GSPE/v-omega.linux.com-124"
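For anyone who wants to reproduce that lab check from the search bar, the same regex can be exercised against one of the sample events with makeresults (a quick search-time sketch; the event is copied from the samples above):

```
| makeresults
| eval _raw="rsaid=\"/alpha-A-01/ALPHA-CK-GSPE/v-alpha.linux.com-101\""
| rex field=_raw "rsaid=\"/[^/]+/(?<idname>[^/]+)/"
| table idname
```

If the regex is correct, idname should come back as ALPHA-CK-GSPE.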
Hi @Karthikeya, two questions: did you follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/Data/Configureindex-timefieldextraction ? You located the conf files on the Indexers (using the Cluster Manager), but are there any intermediate Heavy Forwarders between the data sources and the Indexers? In the first case, follow the instructions. In the second case, put the conf files on the first Heavy Forwarders the data passes through. Ciao. Giuseppe
I am trying to extract a field at index time. I have put the following on my cluster master and pushed it to the indexers, but the field is not getting extracted:

transforms.conf
[idname_extract]
SOURCE_KEY = _raw
REGEX = rsaid="\/[^\/]+\/([^\/]+)\/
FORMAT = idname::$1
WRITE_META = true

props.conf
TRANSFORMS-1_extract_idname = idname_extract

The field is not extracted once indexing is done. But when I run the extraction in search it works, which means my regex is correct and only the index-time extraction is failing:
|rex "rsaid=\"\/[^\/]+\/(?<idname>[^\/]+)\/"
Raw event: rsaid="/saturn-X-01/SATURN-CK-GSPE/v-saturn.linux.com-44"
I need to extract idname=SATURN-CK-GSPE at index time. Am I missing something?
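For comparison, a minimal index-time extraction typically needs three pieces on the indexers (or on the first full Splunk instance that parses the data). The stanza names below are assumptions based on the snippet above — in particular, the props.conf stanza must be scoped to your actual sourcetype (or source/host), which the posted props.conf appears to be missing:

```
# transforms.conf
[idname_extract]
SOURCE_KEY = _raw
REGEX = rsaid="\/[^\/]+\/([^\/]+)\/
FORMAT = idname::$1
WRITE_META = true

# props.conf -- [your_sourcetype] is a placeholder for the real sourcetype
[your_sourcetype]
TRANSFORMS-1_extract_idname = idname_extract

# fields.conf -- tells search to look the field up in the index
[idname]
INDEXED = true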
I think I might have found something... I tested this by setting phoneHomeIntervalInSecs = 3600 so that the client only pulls updates every hour, and I found this REST call in the REST API Reference Manual. I tried it, but /reload does not seem to force the client to phone home. However, I found another option in the response that works:

<link href="/services/deployment/client/config" rel="list"/>
<link href="/services/deployment/client/config" rel="edit"/>
<link href="/services/deployment/client/config/reload" rel="reload"/>
<link href="/services/deployment/client/config/sync" rel="sync"/>

It is /sync, so I tried:

curl -k -u username:pass -X POST https://<IP>:8089/services/deployment/client/deployment-client/sync

Hitting this URL makes the client phone home immediately, and it also pulled the app updates I had deployed. This worked for me.
@uagraw01 A "bad allocation" error often indicates that the server is running out of memory while processing the request; it can occur during large searches or when the server's memory resources are insufficient. "HTTP Status 400 (Bad Request)" typically means that the request sent to the server was malformed in some way, so check the request syntax and make sure all required parameters are correctly formatted. Check the resources below:
https://community.splunk.com/t5/Reporting/Searches-fail-with-quot-bad-allocation-quot-error/m-p/197630
https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Noop
https://docs.splunk.com/Documentation/Splunk/9.4.0/Search/Comments
I would also recommend raising a support ticket.
Hello, I have XML messages in search. A row looks like this:

<log><local_time>2025-02-25T15:02:59:955059+05:00</local_time><bik>ATYNKZKA</bik><fileName>stmt_3110449968.pdf</fileName><size>555468</size><iin>800716350670</iin><agrementNumber>3110449968</agrementNumber><agrementDate>08.11.2011</agrementDate><referenceId>HKBRZA0000388473</referenceId><bankCode>ALTYNDBZ</bankCode><result>OK</result></log>
<log><local_time>2025-02-25T15:02:59:885557+05:00</local_time><bik>ATYNKZKA</bik><fileName>stmt_dbz.pdf</fileName><size>152868</size><iin>840625302683</iin><agrementNumber>4301961740</agrementNumber><agrementDate>21.06.2023</agrementDate><referenceId>HKBRZA0000388476</referenceId><bankCode>ALTYNDBZ</bankCode><result>OK</result></log>

After searching, the '_time' and 'log.local_time' fields show date and time down to seconds and fractions, which seems OK. But when I build a timechart, it seems timechart only knows about hours, not minutes and seconds — my span=5m is ignored. Using either _time or log.local_time would be fine for me. I have tried various ways of parsing with strptime, but without success. Thanks
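One thing worth noting about these events: local_time uses a colon before the fractional seconds (15:02:59:955059), not the usual dot, which commonly breaks timestamp parsing and can leave _time truncated to a coarser granularity. A search-time sketch to rebuild _time from log.local_time before the timechart (assuming log.local_time has already been extracted, e.g. via spath) might look like:

```
... | eval _time=strptime('log.local_time', "%Y-%m-%dT%H:%M:%S:%6N%:z")
    | timechart span=5m count
```

The format string here is an assumption matched to the sample rows above — %6N for the six subsecond digits after the unusual colon, and %:z for the +05:00 offset. If this fixes the chart, the longer-term fix would be a matching TIME_FORMAT in props.conf at index time.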
Hello Splunkers!! I am writing to bring to your attention a critical issue we are experiencing following our recent migration of Splunk from version 8.1.1 to 9.1.1. During routine operations, specifically while attempting to schedule reports from the dashboard using the noop command, we encounter a "FATAL" error reporting a "bad allocation":

Server reported HTTP status=400 while getting mode=results: bad allocation

Please help me get this fixed.
Hello, I recently moved the ES app from one SH to another, non-clustered SH. Since then, this error has been appearing: Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel
Thanks for your help, really appreciated! As per the screenshot below: "Convert real-time searches into scheduled searches." Is real-time the same as ad-hoc? Could you please help me differentiate between Historical, Real-time, Summarization, and Ad-hoc searches?
Hi all, just to let you know that this issue has been troubleshot in depth with customer support; the fix should be included in the upcoming Splunk 9.4.2 release. Regards
Hi @chenfan, let us know if we can help you more, or, please, accept one answer for the benefit of the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Here is my final configuration that resolved this issue. I hope it helps anyone experiencing a similar problem.

UF --> $SPLUNK/etc/system/local/server.conf
[queue=parsingQueue]
maxSize = 20MB

[general]
parallelIngestionPipelines = 4
We have an index named ABC with a searchable retention period of 180 days and an archival period of 3 years. I would like to transfer all logs, currently held in Splunk archive storage, to AWS S3. Could you please advise on how to accomplish this, and whether the process would cover both searchable and archived logs? A step-by-step guide would be much appreciated. Thank you.
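For on-premises Splunk, one common pattern is to hook the freezing step so that each bucket is copied to S3 as it rolls out of the searchable window. A minimal indexes.conf sketch (paths, index name, and script location are assumptions — and note this only handles buckets frozen from now on; buckets already sitting in the archive directory would need a one-time copy, e.g. with aws s3 sync):

```
# indexes.conf on the indexers
[ABC]
# retain 180 days searchable, then invoke the archive script on each bucket
frozenTimePeriodInSecs = 15552000
coldToFrozenScript = "/opt/splunk/etc/apps/s3_archive/bin/frozen_to_s3.sh"
```

The script itself receives the bucket path as its first argument and would typically run something like aws s3 cp on it. If this is Splunk Cloud rather than on-prem, archived data is managed through DDAA/DDSS instead, and self-service export to your own S3 bucket works differently — worth confirming which deployment you have before following either path.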